Systems thinking and the nature of reality
Complexity Labs In my last post I made use of a concept map of linear management, which I had made in January 2013. It was neat and simple, with a lot of explanatory power. I used it as a contrast to non-linear management, especially of wicked problems. It satisfied me at the time, but it did not answer all my questions. Linear management is one thing, but does that mean there are linear and non-linear systems? Or linear and non-linear problems? What about causality and correlation in complex situations? How exactly should we understand them? I finally googled ‘causality’ and ‘correlation’ on YouTube, which brought me to a ComplexityLabs video. I happen to have a fetish for 14-letter word combinations, so ComplexityLabs (about) was right for me. They have their own video channel, which I highly recommend (here). Why would I do that? Well, I watched two videos (causality and non-linear causality), got to the bottom of them using concept maps, and was very happy with what I had learned.
Gordian knot One of the reasons why systems thinking is hard to grasp is that it has so many facets, and these facets are intertwined. Systems learning is about problematic situations or wicked problems, but is itself also a wicked problem. It is like cutting the Gordian Knot. Of course most learning is like that, although systems learning may be more so than some other fields of learning. It is often said that the best way of learning is by doing or playing, which is why Bob Williams and I wrote Wicked Solutions. Once you have some practical insight into the knot’s innards, you may attempt to untie it. This still applies, but you can also read on and see what I made of the whole knotty business.
Causality What follows here is a description of the red-framed top part of my concept map (below) of a ComplexityLabs video (here), which you may like to watch first. It will only take 10 minutes of your time and is quite entertaining. Linear causality is a way of describing how humans experience and create change in its more common, linear form. What we observe is that one or more events are unidirectionally followed by one or more other events. If this happens regularly, we call the first event ‘cause’ and the ensuing event ‘effect’, but philosophers tend to disagree on the reality of the concept of causality. Plato says that without causality there is nothing, whereas Hume says causality is all make-believe. Kant turned it into one of his twelve a priori categories of human understanding. “For many years both scientists and statisticians were reluctant to even say the word ‘causation’.” (Kenny 2004, 3). Judea Pearl, perhaps better known as the father of the beheaded Wall Street Journal bureau chief Daniel Pearl, resolved this by positing three conditions for causality: 1. time precedence; 2. relationship; and 3. non-spuriousness. The first is simple, the second involves statistics and all the associated boundary conditions to make it work, and the third (non-spuriousness) is all about the difficulty of avoiding confounding variables, also known as the “third variable problem.” A third, unknown or ignored, factor may be the actual cause of a relationship: the sales of sunglasses and ice cream are correlated, but the cause of both is nice, sunny weather.
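The third-variable problem is easy to demonstrate numerically. The sketch below (a minimal simulation with made-up numbers, not real sales data) generates ice cream and sunglasses sales that each depend only on sunshine, yet come out strongly correlated with each other; controlling for the confounder makes the correlation collapse.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# The confounder: daily hours of sunshine (hypothetical units).
sunshine = rng.uniform(0, 12, n)

# Neither variable causes the other; both depend on sunshine plus noise.
ice_cream_sales = 10 * sunshine + rng.normal(0, 5, n)
sunglasses_sales = 7 * sunshine + rng.normal(0, 5, n)

# Strong correlation despite the absence of any direct causal link.
r = np.corrcoef(ice_cream_sales, sunglasses_sales)[0, 1]
print(f"raw correlation: {r:.2f}")

# Control for the confounder: regress each variable on sunshine and
# correlate the residuals (a simple partial correlation).
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residuals(ice_cream_sales, sunshine),
                        residuals(sunglasses_sales, sunshine))[0, 1]
print(f"partial correlation, given sunshine: {r_partial:.2f}")
```

The raw correlation is close to 1, while the partial correlation hovers around 0: the relationship was spurious in Pearl’s sense, which is exactly what the non-spuriousness condition is meant to rule out.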
Determinism A fundamental, paradigmatic problem with linear causality is its necessary tendency to reductionism: “any theory or method that holds that a complex idea, system, etc, can be completely understood in terms of its simpler parts or components” (Collins English Dictionary). This leads rather naturally to (a belief in) determinism: “the philosophical doctrine that all events including human actions and choices are fully determined by preceding events and states of affairs, and so that freedom of choice is illusory.” So the emphasis on linear causality indirectly leads to two things: determinism and a belief in the indisputable facts of science. This is of little consequence in the physical world, where science and technology have worked great wonders by using a controlled, closed-system approach. It becomes quite controversial in the social world of open systems. Which brings us to the blue-framed, bottom part of the concept map, which deals with non-linear causality. You may prefer to watch the ComplexityLabs video on non-linear causality first (here).
Non-linear causality … occurs when events interact bi-directionally with each other. A key characteristic of non-linear causality is feedback, which comes in three forms: 1. self-reinforcing loops; 2. micro-macro dynamics; and 3. reverse causation. The first one is the best known. Self-reinforcing (as well as stabilizing) loops occur in causal loop diagrams and system dynamics. They can lead to disproportional outcomes (e.g. the well-known butterfly effect), which in turn lead to indeterminism. Indeterminism is enhanced by the fact that non-linear causality is typical of open systems, which – provided they are sufficiently complex – have an almost infinite number of seemingly insignificant initial situations that may trigger an equally infinite number of self-reinforcing processes.
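The disproportionality of the butterfly effect can be illustrated with the logistic map, a standard textbook example of non-linear feedback (the equation and parameter choice below are the textbook ones, not something from the ComplexityLabs video). Two trajectories starting a ten-billionth apart end up completely decoupled after a few dozen iterations:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), fully chaotic at r = 4.
def trajectory(x0, r=4.0, steps=100):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # a seemingly insignificant initial difference

diffs = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap between trajectories: {diffs[0]:.1e}")
print(f"largest gap within 100 steps:     {max(diffs):.2f}")
```

A minuscule difference in the initial situation grows roughly exponentially until the two futures bear no resemblance to each other, which is why sufficiently complex open systems are practically indeterministic even when each step is perfectly rule-governed.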
Inter-relationships, perspectives, boundaries This is the ‘war cry’ or ‘haka’ of the Wellingtonian Bob Williams, who explored the potential of these three core systems concepts in two books, including Systems concepts in action, and many workshops (see e.g. here, where he explains the origins of the three concepts, and here). The three concepts can be found in the top-left purple frame within the blue non-linear causality frame. What we see is that open systems (including social systems) lack a definitive boundary. We don’t know what is ‘in’ and what is ‘out’ (e.g. event D). This can only be settled by an inquiry in which multiple perspectives debate their perception of the open system, which is best understood as a whole of non-linear causal inter-relationships. Since feedback is an essential characteristic of these inter-relationships, debate participants must look into self-reinforcing processes, micro-macro dynamics and instances of reverse causation. These are in turn linked to setting the right goals and other patterns involved in downward causality. Because of the principle of equifinality, pathways to achieve these goals are not fixed. Since debate (or dialectic, or critique) is the basic way in which open systems (including wicked problems) are to be managed, Bob’s ‘haka’ concepts are a good way of circumscribing its operationalization.
On the nature of causality … is the title of a marvellous presentation (here) by George F.R. Ellis, emeritus distinguished professor of complex systems from Cape Town, South Africa, which he delivered during the 16th Kraków Methodological Conference “The Causal Universe” on May 17-18, 2012. In it he distinguishes five types of downward causation: (1) algorithmic top-down causation; (2) top-down causation via non-adaptive information control; (3) top-down causation via adaptive selection (evolution); (4) top-down causation via adaptive information control; and (5) intelligent top-down causation (the human mind). To explain this in simple terms Ellis (and ComplexityLabs, too) uses Russell Ackoff’s aircraft metaphor by asking ‘why does it fly?’ There are four explanations: (i) the bottom-up view: Bernoulli’s theorem; (ii) the top-down view: it was designed that way; (iii) the same-level view: the pilot is flying according to a timetable; and (iv) the topmost view: it is profitable (or people want/need to fly, etc.). The key point is that simultaneous multiple causality (inter-level, as well as within each level) is always in operation in complex systems.
Alexander the Great … was taught by Aristotle, so he may have learned some of these ideas about causality. Aristotle distinguished four causes: (a) material: ‘that out of which’; (b) formal: ‘the form’, ‘the account of what it-is-to-be’; (c) efficient: ‘the primary source of the change or rest’; and (d) final: ‘the end, that for the sake of which a thing is done.’ Ellis contends that Aristotle’s categorization can be adapted to correspond exactly to the four explanations of why the plane flies (material->Bernoulli, formal->design, efficient->timetable, final->profit). If Alexander really cut the Gordian Knot, he spoiled the puzzle. But maybe he didn’t: according to some, Alexander pulled the knot off its pole pin, exposing the two ends of the cord and allowing him to untie the knot without having to cut through it. We will never know, but I sure hope so. There is much that the world owes to Hellenism and its efforts to disentangle the world’s complexity in a rational way. Where would we be without free trade, great libraries, free speech and free thought? I am pretty sure we wouldn’t have systems thinking as a discipline and as a tool for better understanding the world, politics, business and humankind itself.