CEDIL & Centre for Evaluation Lecture Series

The Centre of Excellence for Development Impact and Learning (CEDIL) and the Centre for Evaluation are convening a lecture series addressing methods and innovation in primary studies.

Lecture 5 (13.00 – 14.00, 31 October 2018, Jerry Morris B, Tavistock Place): Uncertainty and its consequences in social policy evaluation and evidence-based decision making, Matthew Jukes (Fellow and Senior Education Evaluation Specialist at RTI International) and Anne Buffardi (ODI)

The methodologies of RCTs and systematic reviews imply a high standard for the level of rigour in evidence-based decision making. When these standards are not met, how should decision-makers act? When a clear body of evidence is not available, there is a risk that action is delayed while further research is conducted, or that action is taken without optimal use of the evidence that does exist. In fact, all evidence-based decisions involve a degree of uncertainty. The question we address in this paper is: What level of certainty is required for which kinds of decisions? Scientific skepticism demands a high degree of certainty for sure and steady advances in knowledge. Medical interventions with a risk of death require a high degree of certainty. But what about decisions in social policy? We argue that decisions should be made based on a consideration of both the uncertainty and the consequences of all possible outcomes. Put simply, if severe negative consequences can be ruled out, we can tolerate greater uncertainty in positive outcomes. We present a framework for making decisions on partial evidence. The framework has implications for the generation of evidence too. Social policy evaluations should systematically consider potential negative outcomes. Sources of uncertainty – including assumptions, methods and the generalizability of findings, as well as statistical uncertainty – should be analyzed, quantified where possible, and reported. Investment should be made in reducing uncertainty in the outcomes with the biggest consequences. Uncertainty can be managed by placing small bets to achieve large goals. Overall, more systematic analysis of uncertainty and its consequences can improve approaches to decision-making and to the generation of evidence.
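The short Python sketch below is not taken from the lecture; it is a hypothetical illustration, using invented numbers and thresholds, of the general idea that a decision rule can weigh both the probability and the severity of possible outcomes, tolerating more uncertainty about positive outcomes when severe negative ones can effectively be ruled out.

```python
# Hypothetical illustration (not the speakers' framework): a decision rule that
# weighs both uncertainty (probability) and consequences (payoff) of outcomes.
# All numbers and thresholds below are invented for demonstration only.

def decide(outcomes, severe_loss=-10.0, max_severe_prob=0.01):
    """Recommend a course of action for a proposed intervention.

    outcomes: list of (probability, consequence) pairs for the proposed action.
    severe_loss: consequence at or below which an outcome counts as severe.
    max_severe_prob: tolerated total probability of severe outcomes.
    """
    total_severe_prob = sum(p for p, c in outcomes if c <= severe_loss)
    expected_value = sum(p * c for p, c in outcomes)
    if total_severe_prob > max_severe_prob:
        return "do not act: severe negative consequences cannot be ruled out"
    if expected_value > 0:
        return "act: positive expected value and severe losses ruled out"
    return "gather more evidence: no severe risk, but benefits remain uncertain"

# Example with assumed figures: a programme with a likely modest gain,
# a less certain large gain, and a small chance of a mild negative outcome.
programme = [(0.6, 2.0), (0.3, 8.0), (0.1, -1.0)]
print(decide(programme))  # -> "act: ..." under these assumed numbers
```

Under these assumed numbers, the mild negative outcome does not block action because it falls short of the severity threshold; a small probability of a catastrophic loss, by contrast, would dominate the decision regardless of the expected value.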

Matthew Jukes has two decades of academic and professional experience in evaluating education projects, particularly in early-grade literacy interventions and the promotion of learning through better health. His research addresses culturally relevant approaches to assessment of social and emotional competencies in Tanzania; improving pedagogy through an understanding of the cultural basis of teacher-child interactions; frameworks to improve evidence-based decision-making; and methods to set reading proficiency benchmarks. He is contributing to projects in Malawi and Tanzania aimed at improving the quality of pre-primary and primary education in those countries.

Upcoming lectures:

Lecture 6 (13.00 – 14.00, 28 November 2018, Jerry Morris B, Tavistock Place): Contextualised structural modelling for policy impact, Orazio Attanasio

Lecture 7 (12 December 2018): Multidimensional indices for poverty measurement, Edoardo Masset

Lecture 8 (23 January 2019): Stakeholder Engagement for Development Impact Evaluation and Evidence Synthesis, Sandy Oliver

Lecture 9 (6 February 2019): Evidence Standards and justifiable evidence claims, David Gough

Lecture 10 (6 March 2019): Joanna Busza

Lecture 11 (24 April 2019): The need for using theory to consider the transferability of interventions, Chris Bonell

Lecture 12 (5 June 2019): Evidence for Action in New Settings: The importance of middle-level theory, Nancy Cartwright 


Previous lectures in this series are available to watch online:

Lecture 1: The Four Waves of the Evidence Revolution: Progress and Challenges in Evidence-Based Policy and Practice, Howard White (Research Director of CEDIL and Chief Executive Officer of the Campbell Collaboration)

The evidence movement has rolled out in four waves since the 1990s: the results agenda, the rise of RCTs, systematic reviews, and the development of an evidence architecture. The revolution has been uneven across sectors and countries, and it remains unfinished. Drawing on experiences from around the world, this talk provides a historical overview of the evidence movement and the challenges it faces. Responses to these challenges are considered, including those offered by the work of CEDIL. Watch the lecture online.

Lecture 2: Representing Theories of Change: Technical Challenges and Evaluation Consequences, Rick Davies (independent Monitoring and Evaluation consultant [MandE NEWS], based in Cambridge, UK)

This lecture summarised the main points of a CEDIL inception paper of the same name. That paper looks at the technical issues associated with the representation of Theories of Change and the implications of design choices for the evaluability of those theories. The focus is on the description of connections between events, rather than the events themselves, because this is seen as a widespread design weakness. Using examples and evidence from a range of Internet sources, six structural problems are described, along with their consequences for evaluation. The paper then outlines six different ways of addressing these problems, which could be used by programme designers and by evaluators. These solutions range from simple-to-follow advice on designing more adequate diagrams to the use of specialist software for the manipulation of much more complex static and dynamic network models. The paper concludes with some caution, speculating on why the design problems are so endemic but also pointing a way forward. Three strands of work are identified that CEDIL and DFID could invest in to develop the solutions identified in the paper. Watch the lecture online.

Lecture 3: Development Impact Attribution: Mental Models and Methods in ‘Mixed Marriage’ Evaluations, James Copestake (Professor of International Development at the University of Bath)

This lecture uses the marriage metaphor to explore collaboration that spans academic traditions and disciplines, researchers and managers, and public and private sector agencies. Mental models are used to explore the ontological, epistemological, contractual and socio-political tensions created by formalised evaluative practice. The lecture focuses particularly on experience with mixing qualitative impact evaluation with other approaches to generating evidence, learning and legitimising public action. It draws on case studies from the garment industry, medical training, housing microfinance and agriculture, spanning three continents. Watch the lecture online.

Lecture 4: Using Mid-level Theory to Understand Behaviour Change: Examples from Health and Evidence-based Policy, Howard White

Mid-level (or mid-range) theory rests between a project-level theory of change and grand theory. The specification and testing of mid-level theories help support the generalisability and transferability of study findings. For example, in economics, the operation of the price mechanism to balance supply and demand is a grand theory. An agricultural fertilizer subsidy programme would have a project-level theory which partly draws on the theory of supply and demand: lowering the price increases demand. A mid-level theory could be developed relating to the use of price subsidies, of which the fertilizer programme would be a specific application. This talk adopts the transtheoretical model of behaviour change to apply mid-level theory to the analysis of two sets of interventions: the adoption of health behaviours, and the promotion of evidence-based policy change. Watch the lecture online.
