Impact evaluation, innovation and systemic learning

Why can’t evaluation be more systemic?

In this blog post, I will: (1) summarize two contributions to a recent IDS workshop on systemic evaluation; (2) interpret a novel three-dimensional framework that links key systems concepts with learning and value types; (3) discuss three reasons (ignorance, loss of control, and a presumed lack of practicality) why more systemic evaluation approaches are ignored; and (4) suggest ways out of this systemic conundrum, e.g. by taking the 5 E’s (efficacy, efficiency, effectiveness, ethics, and elegance) more seriously.

Systemic approaches to evaluation and impact     IDS Bulletin 46.1 (Introduction: Barbara Befani, Ben Ramalingam and Elliot Stern, 2015) presents the implications for evaluation practice of the recognition that development occurs within complex systems. These implications were discussed in March 2013 at a workshop entitled ‘Impact, innovation and learning: towards a research and practice agenda for the future‘, held at the Institute of Development Studies. Development and its evaluation are intricately linked and have to satisfy increasingly complex demands in order to demonstrate effectiveness. Widely accepted, non-systemic approaches such as the ‘logframe’ can no longer be considered adequate. The operationalization of systemic approaches requires both methodological and institutional innovations. Suggested methodological changes include: (1) adopting double- and triple-loop learning to address the effectiveness issue at higher levels of understanding (Hummelbrunner); (2) analysing system boundaries (‘scale’), components, and emergent interactions to broaden the questions asked by impact evaluations (Garcia and Zazueta); and (3) using system dynamics to handle multiple factors that influence the outcome in indirect and non-linear ways (Derwisch and Löwe; Grove). Suggested institutional changes include: (4) applying the critical systems concepts of boundaries and perspectives to the entire intervention system (Williams, see below); and (5) transforming the ‘iron triangle’ of the current ‘evaluation industrial complex’ into an ‘evaluation adaptive complex’ with more than nominal self-awareness of bias and with expanded notions of rigour and responsibility (Reynolds). [CSL4D abstract; full article]

Can impact evaluation adopt systems ideas?     (Williams, 2015) Now that the complexity (SH: and a concomitant lack of effectiveness) of interventions has been more widely acknowledged, evaluation approaches are needed that can address questions about their legitimacy, validity, relevance and usefulness. This article explores how the systems field can address these issues in ways that conventional impact evaluation cannot. To do so, the systems field is conceptualised as going beyond a better understanding of interrelationships, by engaging with multiple stakeholder perspectives and by reflecting on where systemic boundaries are drawn in terms of those interrelationships and perspectives. These boundary choices are considered the result of value judgments based on the perspectives of the stakeholders involved, including the intervention funders, designers, implementers, and evaluation commissioners. The right balance between the three elements (interrelationships, perspectives and boundaries) is critical. An emphasis on interrelationships is likely to bring only superficial (prosaic) benefits to impact evaluation. A strong emphasis on perspectives and boundaries, on the other hand, could result in profound changes to the way in which impact evaluation is conceived and delivered. In particular, it could change the nature of the relationship between the evaluator and key stakeholders such as funders and managers of interventions. [Journal abstract, CSL4D edited; full article]

An exploratory framework to improve coherence    (Hummelbrunner, 2015) This article gives an overview of hierarchized learning theories by Bateson (Learning 0-IV) and by Argyris and Schön (single-, double-, and triple-loop learning), discusses options for interpreting them in diverging terms (e.g. Senge’s adaptive and generative learning), and examines their implications for learning types and (systemic) evaluation approaches. The author goes on to develop a novel interpretation, in which the three core systems concepts of interrelationships, perspectives and boundaries are linked to single-, double-, and triple-loop learning, respectively. A similar typology for values (instrumental, intrinsic, and critical) is then presented, together with its likely correspondence with the learning types and systems concepts. Finally, Hummelbrunner combines the three systems concepts, the three types of learning, and the three types of value in a comprehensive exploratory framework. These three aspects are usually dealt with separately in evaluation assignments, although they mutually influence each other and can be seen as complementary. This innovative three-dimensional framework is proposed for exploring and reflecting on the connections between the three systems concepts, learning and values, in order to develop a better understanding and management of evaluation tools. [CSL4D abstract; full article]

Importance of Hummelbrunner’s exploratory framework       I have been struggling to fully and profoundly understand the workings of Bob Williams’s critical systems approach for systemic intervention design and evaluation (see above abstract) ever since I was lucky enough to learn about it in October 2012. Understanding and applying are two different things here: it is possible to apply his approach without understanding its workings, as long as you know (or understand) what to do in each successive step (Williams & Van ’t Hof, 2014). Richard Hummelbrunner’s contribution to the IDS evaluation workshop of 2013 seems to hold the key to the kind of understanding I had in mind. He managed to combine three systems concepts (interrelationships, perspectives, boundaries), three types of learning (single-, double-, and triple-loop learning) and three types of value (instrumental, intrinsic, critical) in one comprehensive exploratory framework. “The framework can be used in an exploratory manner to interrogate the coherence among the various components of an evaluation assignment” (Hummelbrunner, 2015). The framework could thus be used to see whether the right types of learning or systems concepts are envisaged for the evaluation task at hand. A very useful suggestion, indeed.
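To make the correspondences concrete, here is a minimal sketch in Python of how such a coherence check might work. This is my own illustrative encoding with invented names, not anything taken from Hummelbrunner’s paper: the three linked triads are stored as corresponding triples, and an evaluation design is flagged when its choices do not sit on the same row.

```python
# Hummelbrunner's three linked triads (illustrative encoding, not from the paper):
# each row pairs a systems concept with its corresponding learning and value type.
FRAMEWORK = [
    # (systems concept,     learning type,  value type)
    ("interrelationships", "single-loop", "instrumental"),
    ("perspectives",       "double-loop", "intrinsic"),
    ("boundaries",         "triple-loop", "critical"),
]

def coherent(concept, learning, value):
    """True if the three choices sit on the same 'diagonal' of the cube."""
    return (concept, learning, value) in FRAMEWORK

# An assignment that studies interrelationships but expects triple-loop
# learning and critical values is flagged as incoherent, and worth a second look.
print(coherent("interrelationships", "single-loop", "instrumental"))  # True
print(coherent("interrelationships", "triple-loop", "critical"))      # False
```

Trivial as it is, the sketch captures the exploratory use Hummelbrunner proposes: not a verdict, but a prompt to ask why the components of an evaluation assignment do not correspond.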

Hummelbrunner - exploratory framework (adapted by Sjon van 't Hof)

Concept mapping the framework       Hummelbrunner’s original framework took the form of a cube with three dimensions: systems concepts, learning types and types of values (IDS Bulletin, 46(1), p. 25). It was based on four previous figures. This blog is not called CSL4D (concept and systems learning for development) for nothing, so it seemed right for me to do some ‘meaningful learning’ by combining all five figures in a single concept map and seeing what comes out (see image above). Although the resulting concept map ought to be self-explanatory, I will explain it nevertheless:

1. (Red arrows, read ‘upstream’, from left to right.) In order to produce an effect, there must first be a practical (i.e. implementable) intervention design based on an action strategy. Such a strategy is defined on the basis of the norms, rules and assumptions accepted (typically) by the funding agent and its experts. These theories are in turn embedded in paradigms or worldviews that enable, allow, dictate, prevent or block certain patterns of cognition or behaviour.
2. Evaluation may look at the efficiency, effectiveness or ethics (sustainability etc.) of the intervention. Much evaluation looks mainly at efficiency, e.g. whether the money has been put to good use, or whether certain aspects of the implementation could be improved.
3. There may also be a problem with the effectiveness of the intervention: the design may, for example, be based on the wrong assumptions. That is when we enter the domain of double-loop learning. Perhaps the perspectives of some key stakeholders were overlooked?
4. There may also be something wrong with the sustainability of the intervention (e.g. after the programme’s end). Maybe we should examine where our theories come from, i.e. our worldviews. If so, a little adjustment won’t do; we need some form of transformation. If that is the only way to ensure sustainability, it would be a good thing if an evaluation could convince us that the new worldview will really help.
5. That is where boundaries (especially so-called boundary critique and systemic designing) come in handy. Yet it is important to remember that systemic approaches are by definition unable to produce a ‘best’ option, only one of multiple ‘better’ solutions, which requires our trained and capable judgment.
6. The interrelationships can be studied in a variety of ways (e.g. using system dynamics), while perspectives can be accommodated by stakeholder analysis. If no triple-loop learning is needed, those perspectives can be combined with the prevailing intervention theories (incl. norms and rules) to make conceptual changes to the design.
7. Finally, in critical heuristics, or the critical systems approach more broadly, it is not always useful to distinguish sharply between effectiveness and ethics, since they are likely to be interwoven in specific and unexpected ways. There are linkages with efficiency, too.

(I should mention that this interpretation may differ considerably from what Richard Hummelbrunner intended.)
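The ‘upstream’ chain in the concept map can be sketched as a simple lookup: each kind of evaluation finding points back to a different level of the chain, and hence to a different learning loop. A minimal illustration in Python (the labels and the mapping are my reading of the concept map, not Hummelbrunner’s wording):

```python
# The 'upstream' chain from the concept map, deepest level first
CHAIN = ["worldview", "norms/rules/assumptions", "action strategy",
         "intervention design", "effect"]

# Which level each kind of evaluation question sends us back to
# (my reading of the concept map; the labels are invented)
FEEDBACK = {
    "efficiency":     ("single-loop", "intervention design"),
    "effectiveness":  ("double-loop", "norms/rules/assumptions"),
    "sustainability": ("triple-loop", "worldview"),
}

def revisit(question):
    """Name the learning loop and the chain level a finding points back to."""
    loop, level = FEEDBACK[question]
    assert level in CHAIN  # every loop feeds back into the same upstream chain
    return f"{question}: {loop} learning, revisit '{level}'"

print(revisit("effectiveness"))
# prints: effectiveness: double-loop learning, revisit 'norms/rules/assumptions'
```

The point of the sketch is merely that the deeper the question (efficiency, effectiveness, sustainability), the further upstream the feedback has to travel.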

Systemic learning in practice     As pointed out in the previous paragraph, the framework’s neat distinctions between single-, double-, and triple-loop learning cannot always be made in practice: a change at one level may unfold as a game changer at another level. One might argue that this follows from theory, but there may be a great deal of mind twisting and unexpectedness involved, making this ‘unfolding’ a very demanding task indeed. Another point is that human problem situations tend to be complex and wicked, almost by definition so. Our habits of thought tend not to admit this, or, if they do, we attempt not to live up to the notion. This is what Argyris and Schön (1996) observed: people may claim to favour a dialogical model that fosters double-loop learning, but in practice they fall back on defensive, anti-systemic routines that inhibit double-loop learning (in Hummelbrunner, 2015). The conclusion is obvious: if many of the problems in international development are ‘wicked’, and if our tendency is to avoid double-loop learning (let alone triple-loop learning), then it is best to assume that double- and triple-loop learning are necessary and to put in place mechanisms that ensure they happen. In a way, that is the point of Hummelbrunner’s exploratory framework: to provide a tool for checking evaluation mandates and designs when double- or triple-loop learning seems indicated.

Adoption of the systems approach is a wicked problem     A recent study commissioned by the Department for International Development (DFID) examines ways to broaden the range of designs and methods for impact evaluations (Stern et al., 2012). The report discusses complexity in its chapter 5, but this seems to be mostly restricted to the complexity of ‘complex programmes’, not the complexity of intervention design and inquiry in a complex development environment. In reference to the DFID report, Hummelbrunner adds that “a wider range of options could also broaden the scope for the types of learning, values and systems concepts applied in impact evaluations, which in turn could stimulate more thorough reflections on their correspondence with design approaches.” That may be true, but what is missing from the DFID report is the acknowledgment that there exists a generic systems approach (or possibly another preferred systemic learning option) that is able to deal with widespread complex, ‘wicked’ problems logically and systematically, i.e. by encouraging single-, double-, and triple-loop learning. The question then becomes: how can we make decision-makers in international development and other sectors aware of the profound usefulness of systems thinking? Why do development institutions fail to muster the same enthusiasm as Van Gigch (1985), who, reviewing Werner Ulrich’s ‘Critical Heuristics’ (by and large based on the systems approach of C. West Churchman, 1968, 1971, 1979), exclaimed that it “is an important attempt to formulate the [philosophical] foundations [using Popper and Habermas] … of practical science for the planning of social systems. As we all know, such science is lacking ….” Ulrich’s aim was to provide the critical citizen-planner with a ‘killer’ tool to take on the top-down experts from the bureaucracies and sciences that tend to dominate social planning narratives (including international development narratives). It would seem that evaluation is one of the domains where this type of narrative is still dominant, and there are few signs that this is about to change. The general lack of interest in critical systems theory is hard to explain, considering that nobody seems to contest its 5E logic and necessity. Apparently the old, business-as-usual ways of conducting evaluation have great, yet unjustified ‘survival value’, while a more critical, systemic approach has low ‘adoption value’. Outcome mapping is now offered as the evaluation approach of choice, yet it seems to lack innovative power (double- and triple-loop learning), especially when compared with the systems approach (Van ’t Hof, 2014). Heraclitus knew it 2,500 years ago: “If you do not expect the unexpected, you will not find it.” I would like to add: “… and you probably won’t even miss it. ;-)”

Here is an idea. Forget Outcome Mapping, stick to the much more convenient and bureaucratically favoured LogFrame, but complement it with the Systems Approach as an evaluative learning and systemic design tool. And do so at “odd” moments: before, during and after LogFrame. And invest in making sure that the Systems Approach practitioner has sufficient and wide-ranging experience!

Understanding systemic intervention design & evaluation (SIDE)     There are a few reasons why the systems approach is not adopted: (1) ignorance and lack of experience; (2) no guarantee of a practical result; and (3) loss of control. To start with the last one: a loss of control is already manifest in the lack of effectiveness and sustainability; in fact, some loss of control is needed to allow the necessary learning. As to the second: there is no guaranteed outcome of the systems approach, not only because much depends on the willing collaboration and insights of the people participating (including beneficiaries and decision-makers), but also because it is by definition innovative. Turning innovative ideas into practical ones under conditions of uncertainty will always be tricky (but trickiness is better than almost certain failure, as long as we know what we are doing). The first point concerns ignorance and an insufficient willingness to learn what SIDE could contribute. Key people must know what is involved in the systems approach: (1) the nature and ubiquity of wicked problems; and (2) the particular ways needed to address them. Almost nobody has a clear idea of either, yet both were explained a long time ago by people such as Rittel (see It’s a wicked problem, stupid!, Managing wicked problems: a primer and Resolving wicked problems: rules of the game in this blog; I could also add my post on serendipity). Ultimately, it all comes down to the 5 E’s (efficacy, efficiency, effectiveness, ethics, and elegance). An intervention is efficacious if it works at a basic, short-term level: that’s the easy part. It must also be efficient (we don’t want to waste resources): that’s already a bit more difficult. Furthermore, a design must be effective in the long run, self-sustaining and so on: here it becomes complex and very important, yet most forms of evaluation prefer not to bother with it. Next, there are the ethical aspects (e.g. environmental sustainability) on top of the issue of effectiveness: is our intervention not harmful in some way, directly or indirectly? Finally, there is the elegance of it all: considering the context of the intervention, have we really made a neat design, or is it more of an overweight, grumpy white elephant that destroys rather than snuggles into or links up neatly with its environment? Curiously, there are many interrelationships between these 5 E’s; with experience one learns that it is hard to achieve effectiveness without looking into the ethics and elegance of it all. Reynolds provides some very good reflections on these issues in his paper ‘(Breaking) The Iron Triangle of Evaluation‘.
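The 5 E’s can also be laid out as an ordered checklist that an evaluation could walk through, from easiest to hardest to assess. Here is a sketch in Python with invented wording for the guiding questions (a mnemonic, not a formal instrument):

```python
# The 5 E's, roughly in order of increasing difficulty to assess
# (the one-line questions are my own paraphrases, not a standard formulation)
FIVE_ES = [
    ("efficacy",      "Does it work at all, at a basic, short-term level?"),
    ("efficiency",    "Are resources used without waste?"),
    ("effectiveness", "Is it effective and self-sustaining in the long run?"),
    ("ethics",        "Is it harmful in some way, directly or indirectly?"),
    ("elegance",      "Does the design fit its context rather than fight it?"),
]

def first_failing(scores):
    """Return the first E an intervention fails, or None if it passes all.
    `scores` maps each E to a bool; unlisted E's count as not yet shown."""
    for name, _question in FIVE_ES:
        if not scores.get(name, False):
            return name
    return None

print(first_failing({"efficacy": True, "efficiency": True,
                     "effectiveness": False}))  # prints: effectiveness
```

Walking the list in order reflects the point made above: most evaluations stop at efficiency, while the hard (and interwoven) questions of effectiveness, ethics and elegance come later in the list and are more easily skipped.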

Let me end with a little verse by John P. van Gigch:

The three levels of inquiring systems
To create an artifact,
the designer needs to be
– a scientist to model reality
– an epistemologist to metamodel the design process, and
– an artist to contemplate the result.

This last point (about the ‘artist’) goes well with the things I wrote on the design way.


