
Evaluation and Program Planning

Volume 59, December 2016, Pages 62-73

Making sense of the emerging conversation in evaluation about systems thinking and complexity science

https://doi.org/10.1016/j.evalprogplan.2016.08.004

Highlights

  • Reviews recent literature addressing systems thinking and complexity science in evaluation.

  • Provides one interpretation of the implications systems thinking and complexity science pose for evaluation theory and practice.

  • Key implications include new ways to frame social interventions and their contexts and new considerations for selecting and using methods, valuing, producing and justifying knowledge, and facilitating evaluation use.

Abstract

In the last twenty years, a conversation has emerged in the evaluation field about the potential of systems thinking and complexity science (STCS) to transform the practice of evaluating social interventions. Documenting and interpreting this conversation are necessary to advance our understanding of the significance of using STCS in planning, implementing, and evaluating social interventions. Guided by a generic framework for evaluation practice, this paper reports on an inter-disciplinary literature review and argues that STCS raises some new ways of thinking about and carrying out the following six activities: 1) supporting social problem solving; 2) framing interventions and contexts; 3) selecting and using methods; 4) engaging in valuing; 5) producing and justifying knowledge; and 6) facilitating use. Following a discussion of these issues, future directions for research and practice are suggested.

Introduction

Throughout the development of the evaluation field, new trends gain traction with bold promises of transforming the practice of evaluating social interventions (i.e., policies, programs, practices). For example, over the last three decades, stakeholder approaches have influenced norms about stakeholder involvement in evaluations (Rodriguez-Campos, 2012) and, more recently, results-based management has increased demand for results-based monitoring and evaluation systems (e.g., Kusek & Rist, 2004). Systems thinking and complexity science (STCS) are among the latest of these trends in evaluation (Reynolds, Forss, Hummelbrunner, Marra, & Perrin, 2012). Growing interest in these ideas is evident in publications including books and journals (Cabrera, Colosi, & Lobdell, 2008; Eoyang & Berkas, 1999; Forss, Marra, & Stern, 2011; Levin-Rozalis, 2014; Morell, 2010; Mowles, 2014; Patton, 2011; Williams & Imam, 2007; Williams & Hummelbrunner, 2011; Wolf-Branigin, 2013), conference themes of professional associations (Parsons, Keene, & Dhillon, 2014), and reports from agencies commissioning evaluations (e.g., Fujita, 2010; GIZ, 2011). Driving this interest are the myriad ways in which evaluators and evaluation commissioners see STCS as potentially transforming how social interventions are evaluated.

Conversation about STCS in evaluation focuses on methods and methodologies as well as conceptual and theoretical issues. At least since the 1980s, scholars have been bringing systems and complexity methods into the evaluation field (Gregory & Jackson, 1992a, 1992b; Midgley, 1996; Ulrich, 1988), with the recent book Systems Concepts in Action: A Practitioner’s Toolkit (Williams & Hummelbrunner, 2011) as, perhaps, the latest attempt. Some of the methods being explored include causal loop diagrams and system dynamics (Dyehouse, Bennett, Harbor, & Childress, 2009; Fredericks, Deegan, & Carman, 2008); agent-based modeling (Morell, Hilscher, Magura, & Ford, 2010); soft systems methodology (Attenborough, 2007); social network analysis (Durland & Fredericks, 2005); and critical systems heuristics (Reynolds & Williams, 2011). Evaluators have developed new conceptual frameworks and guides for practice based on STCS (e.g., Cabrera & Trochim, 2006; Cabrera et al., 2008; Hargreaves, 2010; Gopalkrishanan, Preskill, & Lu, 2013; Parsons, 2007; Preskill & Gopalkrishanan, 2014; Marra, 2011a, 2011b; Wasserman, 2010) for evaluating complex interventions (i.e., those with emergent processes and outcomes) and systems change interventions (i.e., those intended to modify social systems such as communities, schools, and healthcare systems). New theoretical approaches to evaluation practice have also been developed, for example Developmental Evaluation (Patton, 2011), Systemic Evaluation (Boyd et al., 2007), and Systematization (Tapella & Rodriguez-Bilella, 2014), and several conventional approaches have been modified to incorporate STCS, including Responsive Evaluation (Gregory, 1997) and Theory-based Evaluation (Callaghan, 2008; Davies, 2004; Hummelbrunner, 2010; Rogers, 2008; Stame, 2004).

Scholars in related fields are also examining the implications of STCS for transforming the ways in which social interventions are designed, implemented, and evaluated. In public health, international aid and development, community psychology, and social services, scholars argue that STCS challenge and transform the ways these fields are conceptualized and practiced (in public health, see Leischow and Milstein (2006), Leischow et al. (2008), Milstein (2008), Sterman (2006), and Trochim, Cabrera, Milstein, Gallagher, and Leischow (2006); in international aid and development, see Jones (2011), Ramalingam, Jones, Reba, and Young (2008), and Ramalingam (2013); in community psychology, see Foster-Fishman, Nowell, and Yang (2007) and Foster-Fishman and Watson (2012); in social services, see Wolf-Branigin (2012)).

STCS ideas and methods are quickly becoming important additions to evaluation toolkits and practices. Evaluation commissioners and stakeholders, in the United States as well as internationally, have begun requesting systems- and complexity-informed evaluations, particularly to assess new kinds of social interventions, such as networks, emerging innovations, and systems change (e.g., Australian Public Service (APS), 2007; Byrne, 2013; Dolphin & Nash, 2012; Jones, 2011). There has been a considerable rise in the number of evaluators who claim to be using systems and complexity ideas and methods in evaluation practice (Patton, 2016), although the extent and ways in which evaluators are drawing on STCS in evaluations are not well understood. To date, there has been no systematic examination of what these ideas and methods contribute to the evaluation field, nor any framework for understanding when and why to use them. Walton (2014) has identified implications of complexity theory for evaluation design, and Mowles (2014) has critically reviewed the turn to complexity science in evaluation. The only broad examinations of systems thinking in evaluation have focused on conceptualizing systems thinking (Cabrera et al., 2008) and exploring the use of systems concepts (e.g., interrelationships, perspectives, boundaries) and methodological approaches in specific evaluation cases (Williams & Imam, 2007).

Given the rapid and widespread interest in STCS in the evaluation field and the lack of a comprehensive examination of its value for evaluation, this paper systematically examines the literature on STCS and proposes a framework for beginning to understand some of the major implications of STCS for designing and conducting evaluations. Because the conversation on STCS in evaluation is relatively new and remarkably diverse, characterized by different STCS ideas and techniques and by different evaluator perspectives and evaluation contexts, this paper does not aim to settle the matter of what STCS means for evaluation practice or to offer prescriptive guidelines. Rather, the paper modestly aspires to identify some of the insights, challenges, and considerations STCS poses that are garnering attention in evaluation and to present these in an admittedly abbreviated manner that can be understood by those unfamiliar with STCS. Readers are invited to challenge or expand on the implications identified and to draw out more practical implications for how particular social interventions should be planned, implemented, or evaluated.

This paper begins with a brief introduction to systems thinking and complexity science and defines how these terms are used here. Next, a framework of evaluation practice is proposed to structure how we might “read” and interpret what STCS has to offer evaluation. This is followed by a discussion of the procedure used to conduct an inter-disciplinary review of literature addressing the implications of STCS for evaluating social interventions. The main body of the paper discusses six aspects of evaluation practice for which STCS raises new ways of thinking and acting: 1) supporting social problem solving; 2) framing interventions and contexts; 3) selecting and using methods; 4) engaging in valuing; 5) producing and justifying knowledge; and 6) facilitating use. Each of these activities is discussed in terms of how it has traditionally been conceived and carried out in the evaluation field, followed by how STCS complements, challenges, or raises considerations for that activity. The paper concludes with several directions for future research, conversation, and evaluation practice.

Section snippets

Defining STCS

The terms systems thinking and complexity science refer to two distinct traditions within the systems and complexity fields that are not easily (or legitimately) collapsed into one set of ideas and techniques. Scholars within the systems and complexity fields argue about which theorists, concepts, and methods fall within each of these traditions as well as which ideas and practices span both traditions. In this paper, the combined term, systems thinking and complexity science (STCS), refers to

Framework of evaluation practice

For the purposes of this paper, I developed a framework of evaluation practice within which the implications of STCS could be examined. The framework was derived from the ways in which Shadish, Cook, and Leviton (1991), Christie and Alkin (2013), and Alkin and Christie (2003) broadly characterized the field of evaluation. In Foundations of Program Evaluation: Theories of Practice, Shadish et al. (1991) outlined five components of evaluation theory: social programming, knowledge construction,

Making sense of the conversation about STCS

I first began following this conversation in 2012 during my doctoral studies at the University of Illinois. Immersed in coursework about evaluation theory and practice, I intently turned to this conversation with questions about how long-standing ideas and assumptions in the evaluation field were enhanced or challenged by STCS. Through independent studies, coursework in the Systems Thinking in Practice program at the Open University, participation in the International Society for Systems

Implications of STCS for evaluating social interventions

This section identifies implications of STCS for six activities involved in evaluation practice. Each sub-section begins with a brief overview of how evaluators typically think about and carry out that activity, followed by a discussion of the new ideas and considerations STCS poses. Specific examples from the literature reviewed are provided.

Conclusion

The emerging conversation in evaluation about STCS raises implications for six key dimensions of theory and practice. First, those using STCS will need to surface and critically examine the assumptions about social problem solving that inform the evaluation being conducted. To fully take up STCS ideas and techniques in evaluation practice, there may need to be a shift from assuming a linear, predict-act-evaluate approach to social problem solving to building capacity for a more iterative, adaptive

Acknowledgments

I thank Thomas Schwandt for his guidance and editorial comments on earlier drafts of this paper; session attendees at the 2015 International Society for the Systems Sciences conference for their feedback on an abbreviated version of this paper; and four anonymous reviewers for their comments and suggestions. Funding to complete this research was provided by a Dissertation Completion Fellowship from the Graduate College at the University of Illinois at Urbana-Champaign.

References (107)

  • L.M. Benjamin et al.

    From program to network: The evaluator’s role in today’s public problem-solving environment

    American Journal of Evaluation

    (2009)
  • BetterEvaluation (2015). Develop initial description. Retrieved from:...
  • A. Boyd et al.

    Systemic evaluation: A participative, multi-method approach

    Journal of the Operational Research Society

    (2007)
  • D. Byrne et al.

    Useful complex causality

  • D. Byrne

    Evaluating complex social interventions in a complex world

    Evaluation

    (2013)
  • D. Cabrera et al.

    A protocol of systems evaluation

    (2006)

  • G. Callaghan

    Evaluation and negotiated order: Developing the application of complexity theory

    Evaluation

    (2008)
  • P. Checkland

    Soft systems methodology: A 30-year retrospective

    (1999)
  • C.A. Christie et al.

    An evaluation theory tree

    (2013)

  • E.J. Davidson

    Actionable evaluation basics

    (2013)
  • R. Davies

    Scale, complexity and the representation of theories of change: Part II

    Evaluation

    (2004)
  • T. Dolphin et al.

    Complex new world: Translating new economic thinking into public policy

    (2012)
  • M.M. Durland et al.

    An introduction to social network analysis

    New Directions for Evaluation

    (2005)
  • G.H. Eoyang et al.

    Evaluation in a complex adaptive system

    (1999)
  • G.H. Eoyang et al.

    Adaptive action: Leveraging uncertainty in your organization

    (2013)
  • J.L. Fitzpatrick

    An introduction to context and its role in evaluation practice

  • K. Forss et al.

    Introduction

    (2011)

  • P.G. Foster-Fishman et al.

    The ABLe change framework: A conceptual and methodological tool for promoting systems change

    American Journal of Community Psychology

    (2012)
  • P.G. Foster-Fishman et al.

    Putting the system back into systems change: A framework for understanding and changing organizational and community systems

    American Journal of Community Psychology

    (2007)
  • K. Fredericks et al.

    Using system dynamics as an evaluation tool: Experience from a demonstration program

    American Journal of Evaluation

    (2008)
  • GIZ (2011). Systemic Approaches in Evaluation: Documentation of the Conference on 25–26 January 2011. Retrieved from:...
  • S. Gopalkrishanan et al.

    Next generation evaluation: Embracing complexity, connectivity, and change

    (2013)
  • J.C. Greene

    Context

  • A. Gregory et al.

    Evaluating organizations: A systems and contingency approach

    Systems Practice

    (1992)
  • A. Gregory et al.

    Evaluation methodologies: A system for use

    Journal of the Operational Research Society

    (1992)
  • A. Gregory

    Evaluation practice and the tricky issue of coercive contexts

    Systems Practice

    (1997)
  • J.T. Grove
    (2015)
  • Hargreaves, M.B. (2010). Evaluating system change: a planning guide. Princeton, New Jersey, pp....
  • P. Hawe et al.

    Knowledge theories can inform evaluation practice: What can a complexity lens add?

    New Directions for Evaluation

    (2009)
  • G.T. Henry

    Choosing criteria to judge program success: A values inquiry

    Evaluation

    (2002)
  • P. Hoverstadt

    The viable systems model

  • R. Hummelbrunner et al.

    Systems thinking, learning and values in evaluation

    (2013)
  • R. Hummelbrunner

    Beyond logframe: Critique, variations, and alternatives

    (2010)

  • R. Hummelbrunner

    Systems thinking and evaluation

    Evaluation

    (2011)
  • Jones, H. (2011). Taking responsibility for complexity: How implementation can achieve results in the face of complex...
  • J.Z. Kusek et al.

    Ten steps to a results-based monitoring and evaluation system

    The International Bank for Reconstruction and Development

    (2004)
  • S. Leischow et al.

    Systems thinking and modeling for public health practice

    American Journal of Public Health

    (2006)