Systems thinking is generally considered difficult, both to learn and to explain. Here is my latest effort in this blog to make it simple. At the end is a concept map; it is self-explanatory, but only if you first read the table above it carefully. This post can be considered a follow-up to the previous one. It has been syndicated by The Systems Community of Inquiry to https://stream.syscoi.com, the global network of systems thinkers, scientists and practitioners.
The three steps are: (1) recognizing that some problems are socially and organizationally complex; (2) acquiring some basic knowledge of systems thinking, social systems thinking in particular; and (3) selecting one or more systems approaches to address the complex problem, starting, arguably, with a generic systems method such as Churchman's dialectical systems approach. The numbering is arbitrary: the three steps form what could be called the systems learning cycle, in which they are interdependent. One needs some idea of social systems thinking in order to recognize that the characteristics of socially and organizationally complex problems call for social systems thinking. And there is hardly a point in recognizing such complexity without some confidence that specific systems approaches could be of use. In practice one will need to go through the learning cycle a few times before it all starts making good sense. (N.B.: I am convinced that the highly generic, dialectical systems approach of Churchman (1968, 1971, 1979) is a very good starting point for both learning and problem-solving purposes.)
Complex problems 101

Warren Weaver (1948, link in references below) was the first to recognize the need for a new class of problems, which he called ‘problems of organized complexity’. Their key characteristic is that “they are interrelated into an organic whole,” which means that they cannot be analysed quantitatively in their holistic, systemic essence. More than 70 years ago he insisted that mankind must find some way of handling these problems, because “the future of the world depends on many of them.” About ten years later, in 1957, Herbert Simon, who would be awarded the 1978 Nobel Prize in Economics, identified what he called ‘ill-structured’ problems: “In short, well-structured problems are those that can be formulated explicitly and quantitatively, and that can then be solved by known and feasible computational techniques” (or algorithms), whereas ill-structured problems cannot. He went on to speculate that computers could be programmed to develop enough artificial intelligence (AI) to handle ill-structured problems better than any human decision-maker or manager. Since then, AI has developed much more slowly than expected, so some 60 years later Stephen Hawking agreed with Simon, in theory (!), but also warned that artificial intelligence could pose an existential threat to mankind (Russell et al. 2015). If it is true that artificial intelligence is the solution to mankind’s complex problems, then its application would seem to present a new, highly complex problem of its own.
Wicked problems

In 1972 (and 1973), Horst Rittel described in detail what he had called wicked problems in one of C. West Churchman’s weekly seminars at Berkeley in 1967. The term was an admission that the use of computer technology to manipulate large numbers of variables in order to solve social problems such as urban renewal, environmental protection, the global food system, health services, and the prison and law enforcement systems had led to very disappointing results. Rittel lists eleven characteristics in 1972 and ten in 1973, and shows why these characteristics prevent the successful application of computer technology. This does not mean that computers cannot be useful in some subordinate way, but they will never crack the hard, wicked core of wicked problems in a convincingly satisfactory way. The ten differences between wicked and tame problems are summarized below in two forms: first a table, then a concept map. I let them speak for themselves.
- Churchman, C. W. (1968). The systems approach. New York: Delta. Retrieved here or here.
- Churchman, C. W. (1971). The design of inquiring systems: Basic concepts of systems and organization. New York, London: Basic Books. Retrieved here.
- Churchman, C. W. (1979). The systems approach and its enemies. New York, London: Basic Books. Retrieved here or here (chapter abstracts) or here (summaries).
- Rittel, H. (1972). On the planning crisis: Systems analysis of the ‘first and second generations’. Bedriftsøkonomen, 8, 390-396. Retrieved here.
- Rittel, H. & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169. Retrieved here.
- Russell, S. et al. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, Winter 2015, 105-114. Retrieved here.
- Simon, H. & Newell, A. (1958). Heuristic problem solving: The next advance in operations research. Operations Research, 6(1), 1-10. Retrieved here.
- Weaver, W. (1948). Science and complexity. American Scientist, 36, 536. Retrieved here.
This was a short explanation of the first step (or the second, if you like) in the systems learning cycle. In a previous post I described the second step, which explains the need for social or soft systems thinking. In the next post I will discuss the dialectical systems approach. A simple, dialectical method for learning how to handle inter-relationships, perspectives, and boundaries (see the concept map) can be found in Wicked Solutions. You can support my work (on an even more convincing sequel, of which this post is a part) by buying Wicked Solutions at Amazon.com. You will support me even more if you buy at Lulu’s. There is also a PDF at Gumroad for only $12. Your thinking will never be the same.