A systems approach to the design of inductive inquiry

The problem of observation as a nonseparable component

In the late 1960s Churchman was invited by NASA to work on an automated organic chemistry laboratory (or ‘inquiring system’), presumably intended to detect possible traces of life (‘biosignatures’ in current ExoMars terms). The laboratory was to land on the Martian surface and do its work entirely on its own using a mass spectrometer. Churchman’s contribution was to help design a suitable inquiring logic. He later used the results to illustrate how a Leibnizian inquirer could be applied in practice for more ‘down-to-earth’ purposes. The example had to be neither too simple (so that it contains all the elements needed for a proper explanation) nor too complex (so that the average reader can still follow it). What follows is a summary of Chapter 4 of The Design of Inquiring Systems (Churchman, 1971). An earlier version of the chapter was published as “On the design of inductive systems: some philosophical problems” (Br J Philos Sci (1969) 20(4): 311-323, doi:10.1093/bjps/20.4.311). It seems that Churchman’s work with NASA was not always well appreciated. Probably NASA scientists couldn’t or wouldn’t understand what he was talking about, as if he were from another planet (Mars?). That is interesting, because it was all about Mars anyway.

The importance of induction      Induction is commonly understood as “The process of inferring a general law or principle from the observation of particular instances” (Oxford English Dictionary Online, accessed October 28, 2016). That is way too narrow. Carnap (1952) takes inductive inference to include all non-deductive inference. That may be a bit broad. It is important to note that deduction by itself is of limited practical use: it can never support contingent judgments such as meteorological forecasts, nor can deduction alone explain the breakdown of one’s car, discover the genotype of a new virus, or reconstruct fourteenth-century trade routes. Inductive inference can do these things more or less successfully because, in Peirce’s phrase, inductions are ampliative (i.e. they add to what is already known), whereas deduction only orders and rearranges our knowledge without adding to its content. We may conclude that induction is of crucial practical importance. The question then becomes: what distinguishes good from bad inductions? This is also known as The Problem of Induction. Churchman combined philosophy and management science to lift this question from the theoretical realm of philosophy into the practical world of planning and decision-making. Unfortunately, what he had to say about good induction was largely lost to misinterpretation and to the lack of practical approaches for applying it effectively. It is the business of this blog (not just this post) to bring “Churchman” back to life.

The induction process 1.0     In chapter 4 of the Design of Inquiring Systems Churchman discusses a “systems approach” to the design of inductive systems. The term “systems approach” applies because he brings system teleology to bear on the design of inductive systems (see my post on Goal seeking by the systems approach). But first we must agree on what is meant by induction, so let’s take a look at the concept map below (which is composed of three sub-maps, intended to help you keep track), in particular the upper right sub-map entitled “Induction process 1.0”. The main thing induction does here is to find and confirm a hypothesis (which could also be a plan or a project, since these are hypothetical solutions to the problem of planned human activity) that explains data (facts, observations, ‘reality’) using existing theory and background information. If it fails to do so, the process must be repeated. It is important to note that extralogical considerations enter into the design choices: ideas about possible solutions and about the conditions of the situation enter into the process. There are no rigorous or mechanical procedures for deriving hypotheses from the data. Those involved look at clues, make guesses, and reject hypotheses in typically undefined ways. Could these intuitive moves be made explicit?

[Concept map: induction using a Leibnizian inquirer]
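
To make the 1.0 loop a little more tangible, here is a deliberately trivial Python sketch (nothing in it comes from Churchman; the candidate rules and observations are invented purely for illustration): particular observations are swept in, a few general rules are guessed, and the first rule that explains all the data gathered so far is kept; otherwise the cycle repeats with more data.

```python
# A deliberately trivial sketch of induction process 1.0: guess general
# rules (hypotheses) from particular observations and keep the first one
# that explains all the data gathered so far; otherwise sweep in more
# observations and repeat. The rules and data are invented for illustration.

def candidate_rules():
    # The "clues and guesses" step, here reduced to a fixed list of rules.
    return [("all observations are even", lambda x: x % 2 == 0),
            ("all observations are positive", lambda x: x > 0)]

def induce(batches):
    data = []
    for batch in batches:                      # each cycle adds observations
        data += batch
        for name, rule in candidate_rules():
            if all(rule(x) for x in data):     # the hypothesis explains the data so far
                return name
    return None                                # nothing satisfactory: redesign and repeat

print(induce([[2, 4], [6, 8, 10]]))            # -> 'all observations are even'
```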

The induction process 2.0     … is the formalization of a more ‘intuitive’ design of the induction process, in which the extralogical considerations are made explicit. Its inspiration comes from Leibniz. For the sake of comparison I have included a simple version of the Leibnizian inquirer (see my post on Leibnizian inquiring systems for the full version). What we see are ‘contingent truths’ (i.e. data, facts, observations) that must be fitted into a wider context (‘fact nets’ formed by existing theory, background information and other contingent truths) in a sensible way (i.e. seeking coherence, simplicity, elegance etc.) using an imaginative (i.e. creative, not purely logical) process. The 2.0 process has eight components (see the numbers in the map), four of which involve key sub-processes: (2) (limitative) adjustment of data, facts etc. in the (deductive) light of existing theory; (3) (limitative) identification of classes of hypotheses; (4/5) the construction (i.e. fact-net growth) and limitative refinement of hypotheses; and (6/7) the limitative validation of plausible hypotheses by verifying their predictive power. For a slightly more detailed step-by-step (or, in fact, component-by-component) description, see the section entitled “Implications for Wicked Solutions” below.
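
As a rough structural sketch only (not Churchman’s formalization, and with every stub body invented as a hypothetical placeholder), the eight components can be arranged as a pipeline in Python; the component numbers in the comments follow the list above.

```python
# A structural sketch of induction process 2.0 as an eight-component pipeline.
# The component numbers in the comments follow the list above; every stub body
# is a hypothetical placeholder invented only to make the skeleton run end to
# end. In Churchman's design each step mixes logical and extralogical judgement.

def collect_data():                                     # (1) data collection
    return [{"sample": i, "carbon_detected": i < 5} for i in range(6)]

def adjust_data(data, theory):                          # (2) limit data in the light of theory
    return [d for d in data if theory["reliable"](d)]

def identify_classes(data, theory):                     # (3) classes of plausible hypotheses
    return ["organic", "inorganic"]

def consult_experts(classes, theory):                   # (4) further limitation via expertise
    return [c for c in classes if c in theory["expert_shortlist"]]

def construct_hypotheses(classes, data):                # (5) fact-net growth and refinement
    return list(classes)                                # here: one hypothesis per class

def predict(hypothesis, data):                          # (6) a prediction per observation
    return [(hypothesis == "organic") == d["carbon_detected"] for d in data]

def satisfactoriness(predictions):                      # (7) fraction of predictions confirmed
    return sum(predictions) / len(predictions)

def run_inquiry(theory, good_enough=0.8, max_cycles=3):
    data = collect_data()
    for _ in range(max_cycles):                         # (8) recycle until satisfactory
        usable = adjust_data(data, theory)
        classes = consult_experts(identify_classes(usable, theory), theory)
        candidates = construct_hypotheses(classes, usable)
        scored = [(satisfactoriness(predict(h, usable)), h) for h in candidates]
        if scored:
            best_score, best = max(scored)
            if best_score >= good_enough:
                return best, best_score
        data += collect_data()                          # sweep in more observations
    return None, 0.0

theory = {"reliable": lambda d: True, "expert_shortlist": ["organic"]}
print(run_inquiry(theory))                              # -> ('organic', 0.833...)
```

The point is the control flow, not the stubs: effort can be shifted between the limitation steps (3 and 4) and the testing steps (6 and 7), and component 8 is simply the recycling loop discussed under “Reiteration” below.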

Some comments       Component 1: a complete design must specify how to collect data and what data to collect. The choice of technique, when to stop collecting, and under what conditions not to collect must be made explicit. Component 2: the system is not passive, because it sorts out uninformative and unreliable data and finds the most significant. This component is not separable from the hypothesizing at component 5. Further partitioning of the observations may be useful, since it allows experts to add more background statements and so develop better hypotheses. Component 3 asks, at the class level, which classes of hypotheses could account for certain data, whereas component 5 deals with specific hypotheses rather than classes. At component 4, experts are asked to provide expertise that helps reduce the number of specific plausible hypotheses. At component 5 it is often best to use a mixture of logical (relevance-based) and extralogical (intuition- or experience-based) methods to limit the hypothesizing task. Moreover, a hypothesis cannot simply be based on a series of successful observations; there must also be an adequate explanation or theory for selecting one explanation among the alternatives. The components interact: more time and effort can be invested either in 3 and 4 (e.g. leaving only a single hypothesis) or in 6 and 7 (considering all possible hypotheses plausible instead of abandoning whole classes), with much depending on the amount of time available and other efficiency or risk considerations. Components 6 and 7 test each candidate hypothesis. At first sight this simply requires comparing actual with predicted results. Ideally this would call for a perfect theory, but if such a theory were available the inquirer could directly list the plausible hypotheses at 3. In its absence, a balance must be struck between sweeping in relevant data and expert theory versus overloading the system, often involving re-examination of components 2 and 4. At component 7, assessing the candidates may be complicated by the question of what the “actual” data are. In theory that is easy, but in practice it may require complex statistics. Alternatively, a relatively simple scoring system may be used to quickly reject unlikely candidates.
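
As one hypothetical illustration of such a “relatively simple scoring system” for component 7 (the candidate names, match counts and cut-off below are all invented), candidates can be scored on the fraction of their predictions that matched the observations, and clearly unlikely ones rejected before any heavier statistical treatment:

```python
# A hypothetical example of a simple scoring system for component 7: each
# candidate hypothesis is scored on the fraction of its predictions that
# matched the observations, and clearly unlikely candidates are rejected
# before any heavier statistical treatment. All names and numbers are invented.

candidates = {
    "H1: amino acids present":       [1, 1, 0, 1, 1],  # 1 = prediction matched observation
    "H2: contamination from lander": [1, 0, 0, 0, 1],
    "H3: purely mineral signal":     [0, 0, 1, 0, 0],
}

REJECT_BELOW = 0.6  # crude cut-off; anything weaker is dropped from further testing

for name, matches in candidates.items():
    fraction = sum(matches) / len(matches)
    verdict = "keep for detailed testing" if fraction >= REJECT_BELOW else "reject"
    print(f"{name}: {fraction:.2f} -> {verdict}")
```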

Reiteration      At component 8 it would be useful to have some history of the weakest links in the total process, so that in the recycling the inquirer could systematically change them. An experienced expert with a “feel” for what may have gone wrong may help achieve the same without such a ‘history’. Yesterday (the 27th of October, 2016) it was reported that Europe’s Mars lander, Schiaparelli, had probably crashed due to software problems (article in Dutch). The claim is that the retrorockets fired for only 3 seconds, shortly before the lander hit the Martian surface at a speed of 300 km/hour. This seems to suggest that the software mistook the distance to the surface by a factor of 1000, i.e. taking it to be 3 metres instead of 3 kilometres. I must say that I am not convinced: there should at least be some physical, or rather digital, proof of this software mistake. It is this type of process of reasoned assessment that Churchman was attempting to describe.

[Concept map: nine conditions of a teleological system]

Teleology    To examine whether the inquiry program can be considered a goal-oriented system, we must check the 9 conditions for this to be true and examine the implications. For starters: (i) does it have a goal? Yes, because it makes choices to find the most “satisfactory” hypothesis (or solution, etc.), so it is teleological; (ii) it lacks a measure of performance, simply because it seeks the “truth”, no matter what the cost (not an issue for NASA in the 1960s). This implied that it lacked the benefit-cost considerations that would enable optimization of design choices between components 3 and 4 and components 6 and 7; (iii) in the NASA case the main clients were NASA and the broader public, who wished to know more about the surface of Mars and possible “biosignatures” (half a century before the current ExoMars programme), hence the interest in an automated organic chemistry laboratory, because organic chemistry is too involved to send instructions across 200 million kilometres. Interestingly, Churchman writes: “One senses that the extra-chemistry clientele may have some conflicts with the clientele within chemistry.” One can only guess how a similar conflict may have affected the landing performance of the Schiaparelli probe; (iv and v) the scientific method, with its habits of logic and statistical inference, seems to be unmethodical in its efforts to design an effective research system; (vi) decision-making scientists and administrators are remarkably similar in their preference for pluralistic (i.e. non-systemic, ‘empirical’) planning (or intervention design), which implies that no observer (including themselves) can explain how their decisions are made (see the conflict above); (vii and viii) in the example, the designer and his intentions were closely related to those of NASA and its public; (ix) at the guarantor-implementation level we may ask what it is that the designer ultimately seeks to achieve, in the narrow sense or in the broadest sense. The narrow sense is not unimportant, because it clarifies how the battle between competing fact nets will eventually be won by one and only one net. This is also the place where we can ask what a key tool tries to achieve, what its function is, and how some more or less realistic or imaginable maximization or minimization of this function would affect its design.

Enemies of the systems approach     Churchman’s experience with attempting to implement the systems approach in organizational management resulted in the following insights: (a) there is general resistance to the methodical; (b) systems analysis (esp. CP, “critical path”, and PERT, “program evaluation and review technique”) filled the gap (for unclear reasons); (c) CP and PERT “caught on” in a way similar to rumours, bringing their own convictions; (d) misapplications started to abound, because no justification (other than “everybody does it”) was required; (e) this is very similar to the spread of statistical techniques in chemistry, biology and the social sciences, where significance tests and correlations are run “with no end in view except to run them.” A strong warning is in order: “If the pluralist’s resistance to rationality is so strong, then it is an aspect of the system that the designer needs to understand.” Following up on his own advice, Churchman wrote a final book on the systems approach in 1979: “The systems approach and its enemies.” In it he expands the 9 conditions (or categories) for conceiving human activity as a system to 12, with the enemies as the central concern, the systems thinker as the issue of role, and world view as the issue of assurance (see also Goal seeking by the systems approach).

What have we learned?      The following systems perspectives can be distinguished: A. that of the systems approach to assess the goal-seeking rationality of a human activity (e.g. policy, project, enterprise) generally; B. (more specifically) that of the systems approach to assess the goal-seeking rationality of an inquiring system (which is a special case of a human activity) as a whole or of a component; and C. (even more specifically) that of the systems approach to assess the goal-seeking rationality of the components of the inquiring system involved in constructing and refining hypotheses, plans, and projects (i.e. components 4 and 5).

Implications for Wicked Solutions        It is interesting to note that the inquiring system of the systems approach of Wicked Solutions more or less follows the components of an induction process using a Leibnizian inquirer according to Churchman: (1) collect data (resembles “rich picturing” to sweep in inter-relationships); (2) adjust data in the light of current theory (resembles stakeholder analysis). In fact this is a form of deduction; in other words, deduction can be said to be part of certain forms of induction; (3) suggest classes that contain plausible hypotheses by looking at the most significant features of the data (resembles “framing” in Wicked Solutions); (4) suggest further limitations on hypotheses after consulting outside experts or the results of other tests, i.e. increase the theoretical base (resembles “boundary critique”); (5) construct plausible hypotheses using 1 to 4 (resembles “intervention design”); (6) make predictions for each candidate hypothesis (possible in the case of more than one “intervention design”); (7) assign degrees of satisfactoriness to the candidates using 6 (e.g. sustainability, equity, impact etc.); and (8) recycle if no hypothesis is “satisfactory enough.” It is not obvious that whole-system performance will increase if the measure of performance of each of these components increases, but we may assume for the moment that it does. It is clear that any future theoretical treatment of the systems approach of Wicked Solutions will need to look into its rationality from a teleological point of view.
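
For quick reference, the correspondence claimed above can be collected in a small lookup table (a sketch only; the pairings simply restate the list in the previous paragraph, and where no Wicked Solutions element is named the right-hand side is left empty):

```python
# The correspondence claimed above, collected as a small lookup table:
# Churchman's induction components (left) and the roughly corresponding
# element of Wicked Solutions (right), where one is named in the text.

component_map = {
    1: ("collect data", "rich picturing"),
    2: ("adjust data in the light of current theory", "stakeholder analysis"),
    3: ("suggest classes of plausible hypotheses", "framing"),
    4: ("limit hypotheses via outside expertise", "boundary critique"),
    5: ("construct plausible hypotheses", "intervention design"),
    6: ("make predictions for each candidate", None),
    7: ("assign degrees of satisfactoriness", "criteria such as sustainability, equity, impact"),
    8: ("recycle if no hypothesis is satisfactory enough", None),
}

for number, (churchman, wicked) in component_map.items():
    print(f"{number}. {churchman}  <->  {wicked or '(no named counterpart)'}")
```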

