CEDIL & Centre for Evaluation Lecture Series


The Centre of Excellence for Development Impact and Learning (CEDIL) and the Centre for Evaluation are convening a lecture series addressing methods and innovation in primary studies.

Lecture 7: To boldly go where no evaluator has gone before: the CEDIL evaluation agenda

Date: 12 December 2018

Time: 12.45 – 14.00

Location: John Snow, Keppel Street

Speaker: Edoardo Masset

In this lecture Edoardo will introduce the newly established Centre of Excellence for Development Impact and Learning (CEDIL). CEDIL was established by the UK Department for International Development to develop new evaluation methods and to commission evaluation and synthesis studies in neglected areas of international development. During its inception phase, CEDIL identified key methodological evaluation challenges and priority thematic areas to address. The talk will set out CEDIL's ambitious evaluation agenda for the next five years, and will be followed by Q&A and discussion. Watch live online and submit questions during the talk to cedil@lshtm.ac.uk.

Upcoming lectures:

Lecture 8 (23 January 2019): Stakeholder Engagement for Development Impact Evaluation and Evidence Synthesis, Sandy Oliver

Lecture 9 (6 February 2019): Evidence Standards and justifiable evidence claims, David Gough

Lecture 10 (6 March 2019): Title to be confirmed, Joanna Busza

Lecture 11 (27 March 2019): Using RCTs to evaluate social interventions: have we got it right? Charlotte Watts

Lecture 12 (24 April 2019): The need for using theory to consider the transferability of interventions, Chris Bonell

Lecture 13 (22 May 2019): Title to be confirmed, Calum Davey

Lecture 14 (5 June 2019): Evidence for Action in New Settings: The importance of middle-level theory, Nancy Cartwright 

Previous lectures in this series are available to watch online:

Lecture 1: The Four Waves of the Evidence Revolution: Progress and Challenges in Evidence-Based Policy and Practice, Howard White (research director of CEDIL and Chief Executive Officer of the Campbell Collaboration)

The evidence movement has rolled out in four waves since the 1990s: the results agenda, the rise of RCTs, systematic reviews, and the development of an evidence architecture. The revolution has been uneven across sectors and countries, and it remains unfinished. Drawing on experiences from around the world, this talk provides a historical overview of the evidence movement and the challenges it faces. Responses to these challenges are considered, including those offered by the work of CEDIL. Watch the lecture online.

Lecture 2: Representing Theories of Change: Technical Challenges and Evaluation Consequences, Rick Davies (independent Monitoring and Evaluation consultant [MandE NEWS], based in Cambridge, UK)

This lecture summarised the main points of a CEDIL inception paper of the same name. That paper looks at the technical issues associated with the representation of Theories of Change and the implications of design choices for the evaluability of those theories. The focus is on the description of connections between events, rather than the events themselves, because this is seen as a widespread design weakness. Using examples and evidence from a range of Internet sources, six structural problems are described, along with their consequences for evaluation. The paper then outlines six different ways of addressing these problems, which could be used by programme designers and by evaluators. These solutions range from simple-to-follow advice on designing more adequate diagrams to the use of specialist software for manipulating much more complex static and dynamic network models. The paper concludes with some caution, speculating on why the design problems are so endemic, but also points a way forward: three strands of work are identified that CEDIL and DFID could invest in to develop the solutions identified in the paper. Watch the lecture online.

Lecture 3: Development Impact Attribution: Mental Models and Methods in ‘Mixed Marriage’ Evaluations, James Copestake (Professor of international development at the University of Bath)

This lecture uses the marriage metaphor to explore collaboration that spans academic traditions and disciplines, researchers and managers, and public and private sector agencies. Mental models are used to explore the ontological, epistemological, contractual and socio-political tensions created by formalised evaluative practice. The lecture focuses particularly on experience with mixing qualitative impact evaluation with other approaches to generating evidence, learning and legitimising public action. It draws on case studies from the garment industry, medical training, housing microfinance and agriculture, spanning three continents. Watch the lecture online.

Lecture 4: Using Mid-level Theory to Understand Behaviour Change. Examples from Health and Evidence-based Policy, Howard White

Mid-level (or mid-range) theory rests between a project-level theory of change and grand theory. The specification and testing of mid-level theories help support the generalisability and transferability of study findings. For example, in economics, the operation of the price mechanism to balance supply and demand is a grand theory. An agricultural fertilizer subsidy programme would have a project-level theory which partly draws on the theory of supply and demand (lowering the price increases demand). A mid-level theory could be developed related to the use of price subsidies, of which the fertilizer programme would be a specific application. This talk adopts the transtheoretical model of behaviour change to apply mid-level theory to the analysis of two sets of interventions: the adoption of health behaviour, and promoting evidence-based policy change. Watch the lecture online.

Lecture 5: Uncertainty and its consequences in social policy evaluation and evidence-based decision making, Matthew Jukes (Fellow and Senior Education Evaluation Specialist at RTI International) and Anne Buffardi (ODI)

The methodologies of RCTs and systematic reviews imply a high standard of rigour in evidence-based decision making. When these standards are not met, how should decision-makers act? When a clear body of evidence is not available, there is a risk that action is delayed while further research is conducted, or that action is taken without optimal use of the evidence that does exist. In fact, all evidence-based decisions involve a degree of uncertainty. The question we address in this paper is: what level of certainty is required for which kinds of decisions? Scientific scepticism demands a high degree of certainty for sure and steady advances in knowledge. Medical interventions with a risk of death require a high degree of certainty. But what about decisions in social policy? We argue that decisions should be made based on a consideration of both the uncertainty and the consequences of all possible outcomes. Put simply, if severe negative consequences can be ruled out, we can tolerate greater uncertainty in positive outcomes. We present a framework for making decisions on partial evidence. The framework also has implications for the generation of evidence. Social policy evaluations should systematically consider potential negative outcomes. Sources of uncertainty, including assumptions, methods and generalisability of findings as well as statistical uncertainty, should be analysed, quantified where possible, and reported. Investment should be made in reducing uncertainty in the outcomes with the biggest consequences. Uncertainty can be managed by placing small bets to achieve large goals. Overall, more systematic analysis of uncertainty and its consequences can improve approaches to decision-making and to the generation of evidence. Watch the lecture online.

Lecture 6: Contextualised structural modelling for policy impact, Orazio Attanasio (Research Director of IFS, a Director of the ESRC Centre for the Microeconomic Analysis of Public Policy (CPP) and co-director of the Centre for the Evaluation of Development Policies (EDePo))

Early Childhood Development (ECD) interventions have recently received much attention. The consensus is that ECD interventions work. The new challenges, however, are: (i) to understand how interventions work and how they obtain the observed effects; and (ii) how to scale up effective interventions. The answer to the second question is related to the answer to the first. In this lecture Orazio presented some concrete examples of these issues. Watch the lecture online.