Systemic evaluation design

Currently Bob Williams is preparing the second edition of his workbook ‘Using systems concepts in evaluation design’ (available as a pdf for only $5 from https://gumroad.com/l/evaldesign). It describes a practical systems approach to evaluation design. As Churchman explains in chapter 1 of “The systems approach” (available here), his dialectical systems approach was designed first of all to think about the function of systems, human systems such as organizations, policies and projects in particular, and to reflect on their “overall objective and then to begin to describe the system in terms of this overall objective.” You may not be aware of it, but this is as revolutionary an idea today as it was more than half a century ago. It applies as much to systemic design as to systemic evaluation. The main take-away is that one cannot decide on an evaluation method without first looking into half a dozen systemic considerations. So buy that book.

The key question … (when we talk about evaluation) is: what exactly does it seek to achieve? In the case of evaluation this question must be asked twice: about the intervention (the so-called ‘evaluand’) and about the evaluation itself. Bob is one of the first to come up with a method to answer both questions systemically. I will try to describe this method as succinctly as possible by using a concept map, which is the bunch of spaghetti you see below. In the future, whenever you eat spaghetti, you will think back to this post. It’s not difficult.

The purpose … of any intervention is to maximize value (to a client or beneficiary or customer) in terms of merit (intrinsic value), worth (relative value) or significance (meaningfulness, see also here), so evaluation is the attempt to assess how well the intervention is doing this. All three forms of value are important, but worth is particularly useful, because it expresses the notion of constraints or cost. This notion is considered again when we talk about the evaluation criteria, below. The emphasis on purpose is what forces us to consider systems thinking as the best way to go. Purpose is what makes us humans tick, even though we must figure out what the purpose of our actions is. But once we think we know what it is, there are many possible arrangements (or systems) for realizing it. Or we may decide to reconsider its importance and completely ignore it. Making arrangements requires a planner. Deciding what to do requires a decision-maker. So when we have a purpose, we have three roles: client, decision-maker and planner.

The evaluation client   Sometimes the three roles coincide in a single individual. Normally, when we talk of societal or organizational complexity, they are highly differentiated. In systems thinking they may also overlap: a client may also be a planner (that’s when we speak of participation), or a decision-maker may be a client (she often benefits, i.e. enjoys some sort of value or quality). This idea of roles, one of Churchman’s main contributions, can be applied to systemic evaluation. Conventionally, the evaluation client is the decision-maker of both the intervention and the evaluation. In systemic evaluation the evaluation client is the intervention as a whole, including the client and the planner.

Systems concepts   In October 2005, a group of evaluators (see contributors to Bob Williams’ ‘Systems concepts in evaluation’) convened at the University of California, Berkeley (which is exactly where C. West Churchman did most of his work from the 1960s onward, on the 6th floor of Barrows Hall) to figure out a way to explain systems thinking to uninitiated evaluators and decision-makers in simple terms without sacrificing its core principles and effectiveness. After two days in the pressure cooker they came out with the core concepts of inter-relationships, perspectives, and boundaries. What was new was not the concepts, but the idea that these three concepts are necessary and sufficient to explain systems thinking. I have played around with the concepts for a few years and my conclusion is that no explanation of effective systems thinking is complete without referring to all three of them (see e.g. here, here or here), leaving semantics aside.

Bob’s genius … lies in the direct operationalization of these three concepts. People can actually use them without fully understanding how and why they work. Bob’s last two books were on systemic intervention design (Wicked solutions, co-written with me) and on its complement, systemic evaluation. By working through the books (they are workbooks) one gets a direct understanding of the importance and application of the main principles. The main steps in the evaluation book follow the three basic concepts (2) exactly in that order: inter-relationships (to be mapped, 3), perspectives (to be framed, 4), and boundaries (to be critiqued: 5, 6 and 7).
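
To make the sequence of steps a little more tangible, here is a minimal sketch in Python of how the three concepts could be held together as a single design record. The class and field names are my own illustrative shorthand, not Bob Williams’ terminology, and the example entries are invented.

```python
from dataclasses import dataclass, field

# Hypothetical data model of the design steps; names are illustrative
# shorthand, not the workbook's own vocabulary.
@dataclass
class EvaluationDesign:
    # Step 3: inter-relationships, mapped as (element_a, element_b, nature of link)
    inter_relationships: list[tuple[str, str, str]] = field(default_factory=list)
    # Step 4: perspectives, framed as stakeholder -> how they frame the situation
    perspectives: dict[str, str] = field(default_factory=dict)
    # Steps 5-7: boundary judgements to be critiqued (scope, focus, purpose,
    # consequences, criteria/standards, feasibility), kept as free-form notes here
    boundary_critique: dict[str, str] = field(default_factory=dict)

design = EvaluationDesign()
design.inter_relationships.append(("intervention", "beneficiaries", "delivers value to"))
design.perspectives["planner"] = "sees the intervention as a means to an agreed objective"
design.boundary_critique["scope"] = "everything that needs to be considered"
print(design)
```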

Scope and focus   A basic systems principle is that of non-separability, which means that it is always a good idea to look at the larger picture, especially in complex problem situations. These are much more common than we sometimes admit. Scope and focus are boundary issues that can be addressed once we have framed the problem situation, i.e. broadly demarcated the problem and/or solution space. Scope (5a) is all that needs to be considered; focus (5a) is where we think most attention should go. The purpose (5b) of the evaluation must be broadly determined early on. Typical purpose categories are demonstration, improvement, and learning. For a purpose to be achieved it is necessary to prepare for the consequences (5c) of evaluation in terms of politics, ethics and/or practice (5d).
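
One way to picture the relation between scope and focus is as two sets, with the focus contained in the scope. The snippet below is my own illustration (the example items are made up, not from the workbook) and simply flags any focus item that falls outside the declared scope.

```python
# Illustrative only: scope as "all that needs to be considered",
# focus as the subset "where most attention should go".
scope = {"funding flows", "staff capacity", "community needs", "policy environment"}
focus = {"community needs", "staff capacity"}

outside = focus - scope
if outside:
    print(f"Focus items outside the declared scope: {outside}")
else:
    print("Focus is properly contained within the scope.")
```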

Evaluation, narrowly speaking … requires the collection of data using a method or methodology to see how an intervention performs. Standards must be set to compare the data with, but standards for what? Standards are the concrete expressions of more abstract criteria (6b) for assessing interventions. In Bob’s scheme these criteria are selected (6a) from eight categories, which are roughly derived from Werner Ulrich’s twelve critical systems categories (see the end of this post), which in turn are derived from Churchman’s twelve dialectical systems categories (see here). The questions to be answered are, for example: what standards could we apply to be sure that the intervention makes efficient use of its resources and doesn’t overlook certain environmental constraints (environmental in the sense of circumstances)? And so on.
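
As a rough illustration of the criteria-to-standards step (6a, 6b), the sketch below pairs two criterion categories mentioned in the text (resources and the environment) with invented concrete standards and checks that no selected criterion is left without a standard. The category names used here are not the workbook’s actual list of eight, and the standards are made up for the example.

```python
# Hypothetical mapping from abstract criteria (6b) to concrete standards;
# the categories and standards below are invented for illustration.
selected_criteria = ["resources", "environment"]  # 6a: the selection made
standards = {
    "resources": ["no more than 10% of the intervention budget spent on overhead"],
    "environment": [],  # criterion selected, but no concrete standard set yet
}

missing = [c for c in selected_criteria if not standards.get(c)]
if missing:
    print(f"Criteria without concrete standards yet: {missing}")
```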

Feasibility … (7a) is mostly about the interrelated aspects of allocating sufficient resources and selecting an adequate methodology for collecting and processing the necessary data (7b). ‘Resources’ is a broad term that includes the evaluators themselves. There are more inter-relationships in the whole process: if we do not know what standards to apply, we cannot decide on a methodology, and one must also consider the scope, focus, purpose and consequences. An important question is how to engage different perspectives in the whole process, since the stakeholders playing the client and planner roles in the intervention are also the clients of the evaluation. Sensible solutions will need to be devised to address these systemic issues; if not, the usefulness of the evaluation will be very limited.
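
A small sketch of the kind of dependency this paragraph describes: the methodology (7b) should not be fixed before standards exist, and the available resources (including evaluator time) have to cover what the chosen methodology demands. Everything here, from the field names to the simple day-count check, is an assumption made for illustration rather than anything prescribed by the workbook.

```python
from dataclasses import dataclass

@dataclass
class Feasibility:            # 7a, illustrative only
    standards_set: bool       # have concrete standards been agreed?
    methodology: str | None   # 7b: data collection/processing approach
    evaluator_days: int       # available resources, incl. evaluators
    required_days: int        # what the chosen methodology would need

def feasibility_issues(f: Feasibility) -> list[str]:
    issues = []
    if f.methodology and not f.standards_set:
        issues.append("a methodology was chosen before the standards were set")
    if f.required_days > f.evaluator_days:
        issues.append("the methodology needs more evaluator time than is available")
    return issues

print(feasibility_issues(Feasibility(False, "household survey", 20, 45)))
```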

The evaluative mindset … is perhaps best explained in this July 2018 video of Robin Miller (recorded during the Out With It pre-meeting at the Royal Tropical Institute in Amsterdam), who had the original idea for the Berkeley evaluation meeting (Williams 2007). Miller favours good qualitative studies over poor or premature quantitative, experimental ones. That fits very well with the systems approach advocated in this post. Miller lists eight reasons for evaluating: (a) to learn (including about undesired, unintended consequences); (b) to surface assumptions by multi-perspectival teams (e.g. about why we think certain interventions are good); (c) to help establish a compelling base of evidence for future interventions and policies that are actually implemented; (d) to use interventions to reflect on what one values in them and on one’s own values (significance); (e) to contribute to the occurrence of certain outcomes (contribution rather than cause-effect attribution); (f) to discover and document needs; (g) to counter historical distortions in the base of evidence; and (h) to create a level playing field in terms of the base of evidence when we talk about what makes an intervention meaningful, scalable, feasible, and responsive to community needs, values and concerns.



