CEDIL & Centre for Evaluation Lecture Series

The Centre of Excellence for Development Impact and Learning (CEDIL) and the Centre for Evaluation convene a lecture series addressing methods and innovation in primary studies. You can watch most of the lectures back online (details below) and access the outputs from CEDIL on its website.


Upcoming lectures

4th December | How the Global Innovation Fund uses impact forecasts to guide investment decisions | Ken Chomitz (Global Innovation Fund)

11th December | Using evidence in humanitarian decision-making | Sheree Bennett (IRC)

22nd January | Meta-ethnography to build middle-range theories: an exploration in three case studies | Audrey Prost (UCL)

19th February | Issues in the design and practice of randomised trials in implementation science: the case of universal testing and treatment for HIV/AIDS | James Hargreaves (LSHTM)

4th March | Measuring the ‘hard to measure’ in development: abstract, multi-dimensional concepts and processes | Anne Buffardi (ODI)

11th March | Trials and tribulations of collecting evidence on effectiveness in disability inclusive development | Hannah Kuper (LSHTM)

1st April | Using causal-process-tracing middle-level theory to improve local predictions and planning for more effective interventions | Nancy Cartwright (University of Durham and UC San Diego)

22nd April | Emerging Lessons from Africa on Evidence Use for Policy and Implementation | Ian Goldman (Witwatersrand)


Past lectures

Lecture 1: The Four Waves of the Evidence Revolution: Progress and Challenges in Evidence-Based Policy and Practice, Howard White (research director of CEDIL and Chief Executive Officer of the Campbell Collaboration)

The evidence movement has rolled out in four waves since the 1990s: the results agenda, the rise of RCTs, systematic reviews, and the development of an evidence architecture. The revolution has been uneven across sectors and countries, and it remains unfinished. Drawing on experiences from around the world, this talk will provide a historical overview of the evidence movement and the challenges it faces. Responses to these challenges will be considered, including those offered by the work of CEDIL. Watch the lecture online.

Lecture 2: Representing Theories of Change: Technical Challenges and Evaluation Consequences, Rick Davies (independent Monitoring and Evaluation consultant [MandE NEWS], based in Cambridge, UK)

This lecture summarised the main points of a CEDIL inception paper of the same name. The paper examines the technical issues associated with representing Theories of Change and the implications of design choices for the evaluability of those theories. The focus is on the description of connections between events, rather than the events themselves, because this is seen as a widespread design weakness. Using examples and evidence from a range of Internet sources, six structural problems are described, along with their consequences for evaluation. The paper then outlines six ways of addressing these problems, which could be used by programme designers and by evaluators. These solutions range from simple-to-follow advice on designing more adequate diagrams to the use of specialist software for manipulating much more complex static and dynamic network models. The paper concludes on a cautious note, speculating on why the design problems are so endemic, but also points a way forward, identifying three strands of work in which CEDIL and DFID could invest to develop the solutions identified in the paper. Watch the lecture online.
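As a concrete illustration of the paper's theme (though not of its method), a Theory of Change drawn as an arrows-and-variables diagram can be encoded as a directed graph, which makes some structural problems checkable mechanically. The sketch below assumes the Python networkx library; all node names are invented.

```python
# A toy sketch: treat a Theory of Change as a directed graph and flag
# simple structural weaknesses (dangling branches, multiple disconnected
# starting points, feedback loops). Node names are illustrative only.
import networkx as nx

toc = nx.DiGraph()
toc.add_edges_from([
    ("training", "teacher knowledge"),
    ("teacher knowledge", "classroom practice"),
    ("teaching materials", "classroom practice"),
    ("classroom practice", "learning outcomes"),
    ("radio campaign", "awareness"),  # a branch that never reaches the outcome
])

# Nodes with no incoming arrow: the assumed starting activities.
roots = [n for n, d in toc.in_degree() if d == 0]
# Nodes with no outgoing arrow: ideally only the final outcome appears here.
dead_ends = [n for n, d in toc.out_degree() if d == 0]

print("starting points:", roots)
print("end points:", dead_ends)
print("feedback loops:", list(nx.simple_cycles(toc)))
```

Here the check would reveal that "awareness" is a dead end alongside "learning outcomes", the kind of dangling connection the paper identifies as a common design weakness.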

Lecture 3: Development Impact Attribution: Mental Models and Methods in ‘Mixed Marriage’ Evaluations, James Copestake (Professor of international development at the University of Bath)

This lecture uses the marriage metaphor to explore collaboration that spans academic traditions and disciplines, researchers and managers, and public and private sector agencies. Mental models are used to explore the ontological, epistemological, contractual and socio-political tensions created by formalised evaluative practice. The lecture focuses particularly on experience of mixing qualitative impact evaluation with other approaches to generating evidence, learning and legitimising public action. It draws on case studies from the garment industry, medical training, housing microfinance and agriculture, spanning three continents. Watch the lecture online.

Lecture 4: Using Mid-level Theory to Understand Behaviour Change: Examples from Health and Evidence-based Policy, Howard White

Mid-level (or mid-range) theory sits between a project-level theory of change and grand theory. The specification and testing of mid-level theories help support the generalisability and transferability of study findings. For example, in economics, the operation of the price mechanism to balance supply and demand is a grand theory. An agricultural fertilizer subsidy programme would have a project-level theory which partly draws on the theory of supply and demand (lowering the price increases the quantity demanded). A mid-level theory could be developed for the use of price subsidies in general, of which the fertilizer programme would be a specific application. This talk adopts the transtheoretical model of behaviour change to apply mid-level theory to the analysis of two sets of interventions: the adoption of health behaviours, and the promotion of evidence-based policy change. Watch the lecture online.
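To make the fertilizer example concrete, here is a minimal sketch, not taken from the talk, of a linear supply-and-demand model in which a per-unit subsidy raises the equilibrium quantity. All parameter values and function names are invented for illustration.

```python
# Linear supply and demand with a per-unit subsidy paid to buyers.
# Demand: Qd = a - b * (buyer price); Supply: Qs = c + d * (producer price).
# With subsidy s, buyers pay p - s while producers receive p.

def supply(price, c=10.0, d=1.0):
    """Quantity supplied rises with the producer price."""
    return c + d * price

def equilibrium(subsidy=0.0, a=100.0, b=2.0, c=10.0, d=1.0):
    """Solve a - b*(p - subsidy) = c + d*p for the producer price p."""
    p = (a - c + b * subsidy) / (b + d)
    return p, supply(p, c, d)

p0, q0 = equilibrium(subsidy=0.0)  # no subsidy: p = 30, q = 40
p1, q1 = equilibrium(subsidy=6.0)  # subsidy of 6: p = 34, q = 44
print(f"quantity rises from {q0:.0f} to {q1:.0f} under the subsidy")
```

The subsidy lowers the effective price buyers face, so the quantity transacted rises, which is the project-level mechanism the mid-level theory of price subsidies would generalise.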

Lecture 5: Uncertainty and its consequences in social policy evaluation and evidence-based decision making, Matthew Jukes (Fellow and Senior Education Evaluation Specialist at RTI International) and Anne Buffardi (ODI)

The methodologies of RCTs and systematic reviews imply a high standard of rigour in evidence-based decision making. When these standards are not met, how should decision-makers act? When a clear body of evidence is not available, there is a risk that action is delayed while further research is conducted, or that action is taken without optimal use of the evidence that does exist. In fact, all evidence-based decisions involve a degree of uncertainty. The question we address in this paper is: what level of certainty is required for which kinds of decisions? Scientific scepticism demands a high degree of certainty for sure and steady advances in knowledge. Medical interventions with a risk of death require a high degree of certainty. But what about decisions in social policy? We argue that decisions should be made based on a consideration of both the uncertainty and the consequences of all possible outcomes. Put simply, if severe negative consequences can be ruled out, we can tolerate greater uncertainty in positive outcomes. We present a framework for making decisions on partial evidence. The framework has implications for the generation of evidence too. Social policy evaluations should systematically consider potential negative outcomes. Sources of uncertainty, including assumptions, methods and the generalisability of findings as well as statistical uncertainty, should be analysed, quantified where possible, and reported. Investment should be made in reducing uncertainty in the outcomes with the biggest consequences. Uncertainty can be managed by placing small bets to achieve large goals. Overall, more systematic analysis of uncertainty and its consequences can improve approaches to decision-making and to the generation of evidence. Watch the lecture online.
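A toy sketch of the core idea, constructed here rather than taken from the authors' framework: tolerate wide uncertainty when severe harms can be ruled out, but not otherwise. All numbers are invented.

```python
# Decision rule on partial evidence: act if the worst plausible outcome is
# not severely harmful AND the expected effect is positive, even when the
# evidence leaves wide uncertainty about which outcome will occur.
from dataclasses import dataclass

@dataclass
class Outcome:
    effect: float       # estimated benefit (arbitrary units; negative = harm)
    probability: float  # subjective probability given the partial evidence

def decide(outcomes, harm_threshold=-1.0):
    """True if no severe harm is plausible and the expected effect is positive."""
    worst = min(o.effect for o in outcomes)
    expected = sum(o.effect * o.probability for o in outcomes)
    return worst > harm_threshold and expected > 0

# Wide uncertainty but no severe downside: act despite the uncertainty.
benign = [Outcome(5.0, 0.4), Outcome(0.5, 0.4), Outcome(-0.2, 0.2)]
print(decide(benign))  # True

# Similar expected value, but a severe harm is plausible: gather evidence first.
risky = [Outcome(6.0, 0.5), Outcome(-3.0, 0.5)]
print(decide(risky))   # False
```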

Lecture 6: Contextualised structural modelling for policy impact, Orazio Attanasio (Research Director of IFS, a Director of the ESRC Centre for the Microeconomic Analysis of Public Policy (CPP) and co-director of the Centre for the Evaluation of Development Policies (EDePo))

Early Childhood Development (ECD) interventions have recently received much attention. The consensus is that ECD interventions work. The new challenges, however, are: (i) to understand how interventions work and how they obtain the observed effects; and (ii) how to scale up effective interventions. The answer to the second question is related to the answer to the first. In this lecture Orazio presented some concrete examples of these issues. Watch the lecture online.

Lecture 7: To boldly go where no evaluator has gone before: the CEDIL evaluation agenda, Edoardo Masset (Deputy Director CEDIL)

During its inception phase, CEDIL identified key methodological evaluation challenges to address and priority thematic areas. CEDIL is now entering the next phase, in which it will apply its innovative evaluation methods to development interventions. These will be funded and managed through the CEDIL programme directorate; CEDIL has announced forthcoming opportunities to apply for funding to contribute to its research agenda. This talk illustrated CEDIL’s ambitious evaluation agenda for the next five years. Watch the lecture online.

Lecture 8: Stakeholder Engagement for Development Impact Evaluation and Evidence Synthesis, Sandy Oliver (Professor of Public Policy at UCL Institute of Education)

This lecture explores methods for engaging stakeholders in making decisions for international aid and social development in the presence and absence of relevant research. It draws on empirical evidence about engaging stakeholders in the generation and use of evidence, taking into account political analysis, social psychology and systems thinking. It finds that the suitability of methods for engagement depends largely on the confidence that can be placed in knowledge about the specific context, and knowledge from elsewhere that seems theoretically or statistically transferable. When decisions are about generating new knowledge, the suitability of methods for engagement depends largely on whether the purpose is to generate knowledge for a specific context or for more generalisable use and, at the outset, the confidence and consensus underpinning the key concepts of interest. Watch the lecture online.

Lecture 9: Evidence Standards and justifiable evidence claims, David Gough (Professor of Evidence Informed Policy and Practice and the Director of the EPPI-Centre in the Social Science Research Unit at UCL)

In developing findings and conclusions from their studies, researchers are making ‘evidence claims’. We therefore need to consider what criteria are used to make and justify such claims. This presentation will consider the use of evidence standards to make evidence claims in relation to primary research, reviews of research (making statements about the nature of an evidence base), and guidance and recommendations informed by research. The aim is to go beyond testing the trustworthiness (quality appraisal) of individual studies to discuss the ways in which evidence standards are used to make evidence claims to inform decisions in policy, practice, and personal decision making. Watch the lecture online.

Lecture 10: When context is the barrier: Evaluating programmes during political turmoil, Joanna Busza (Director of the Centre for Evaluation and Associate Professor in Sexual & Reproductive Health at LSHTM)

The importance of “context” in evaluation is increasingly recognised, especially when considering how complex interventions might be scaled up, adapted or replicated for new settings. Most process evaluation frameworks include documenting contextual characteristics deemed relevant to the intervention’s implementation, such as the structure and function of health systems, cultural norms and practices, and existing laws or policies. The MRC guidelines for process evaluations, for example, suggest using existing theory to identify a priori the factors likely to facilitate or hinder successful implementation of intervention components. This seminar will focus on challenges to the design, implementation and evaluation of community-based health programmes when context – at its broadest level – changes in abrupt, unpredictable ways. I will share examples from research in Cambodia, Ethiopia and Zimbabwe, including both negative and positive consequences of dramatic political or policy changes, and discuss implications for completing and interpreting the affected studies. Watch the lecture online.

Lecture 11: Using RCTs to evaluate social interventions: have we got it right? Charlotte Watts (Chief Scientific Adviser to the UK Department of International Development)

Randomised controlled trials (RCTs) provide the gold-standard method for obtaining evidence of intervention impact. Historically, the approach was developed to assess the impact of clinical interventions. Increasingly, however, RCTs – both individual and cluster – are being used to assess a broad range of behavioural and social interventions. Although some argue that randomised designs are not appropriate for evaluating social and community-based interventions, we disagree. Whilst there may be challenges (as there often are with clinical interventions), randomisation and the use of control populations should always be considered, as this gives the most robust measure of effect size. But this doesn’t mean that we have everything right. Drawing upon examples from intervention research on HIV, as part of the STRIVE research programme, and on violence against women, as part of the LSHTM Gender, Violence and Health Centre, the presentation will discuss whether it is appropriate to apply all of the standards and ‘rules’ without considering the potential implications for the feasibility, forms and applicability of the evidence generated. Watch the lecture online.

Lecture 12: The need for using theory to consider the transferability of interventions, Chris Bonell (Professor of Public Health Sociology at LSHTM)

This lecture explores ways that the transferability of interventions to new settings might be modelled statistically and the role of theory in considering the question of transfer. The lecture draws on preliminary results of the realist trial of the Learning Together whole-school health programme in the UK. Watch the lecture online.

Lecture 13: Learning and Adapting in Development Practice, Patrick Ward (CEDIL Programme Director, Director of Oxford Policy Management’s Statistics, Evidence and Accountability programme)

The growth of the global evidence base has provided opportunities to accelerate development through the systematic sharing of evidence of ‘what works’. Context matters however, and the implementation of development programmes requires the ability to learn from and respond to successes and failures on a short time scale and in the face of limited data. This lecture will explore factors influencing the extent to which development programmes are able to adapt in the light of evidence and learning. It will draw from the practical experience of monitoring and evaluating development programmes and supporting government statistics across a range of sectors in developing countries. Watch the lecture online.

Lecture 14: Designing Evaluations to Inform Action in New Settings, Calum Davey (Deputy Director of the Centre for Evaluation and Assistant Professor at LSHTM)

This presentation was based on a CEDIL inception report. The report drew on the perspectives of more than five academic disciplines, from epidemiology to philosophy, and reviewed a diverse range of literature on the task of ‘learning for elsewhere’, addressing the questions: what is learned in evaluations of complex interventions that is useful for future decision making, and how can this be improved? The suggested answers all involved theory, raising questions about the settings in which those theories apply and how this can be known quickly. The notion of context-centred interventions challenged the sentiment that learning ‘what works?’ or even ‘how does it work?’ is helpful when approaches to knowing ‘why is the outcome occurring?’ would in fact be more useful. This lecture was not recorded.

Lecture 15: Evidence for Action in New Settings: The importance of middle-level theory, Nancy Cartwright (Professor of Philosophy at Durham University and Distinguished Professor at the University of California, San Diego)

Predicting intervention outcomes in a new setting requires a context-local causal model of what is expected to happen there. A theory of change (ToC) for the intervention is a starting point. ToCs are ‘middle-level theories’: they aim for some, but not universal, general applicability. They are typically ‘arrows-and-variables’ models depicting what steps should occur in sequence, but not the interactive factors necessary at each step, nor possible interrupters and defeaters. For policy prediction these theories need context-local thickening. This requires an understanding of how each step produces the next, which in turn calls for middle-level theory of a different kind: the middle-level principles (mechanisms) that govern that production. The context-local model allows better prediction of whether an intervention can work in the new setting, what it would take for it to do so, what the side effects might be, and whether all this is affordable and acceptable in the context. Watch the lecture online.
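One hypothetical way to picture ‘context-local thickening’ is to attach to each arrow the support factors it needs, and then check whether a new context supplies them. The step names and factors in the sketch below are invented for illustration; this is not Cartwright’s formalism.

```python
# Each causal step (arrow) carries the support factors it requires.
# Predicting for a new setting means checking which steps the local
# context can actually sustain. All names here are invented examples.
steps = [
    ("train teachers", "teachers use new method", {"teacher time", "materials"}),
    ("teachers use new method", "pupils learn more", {"pupil attendance"}),
]

def missing_support(context_factors):
    """List each causal step whose required support factors the context lacks."""
    gaps = []
    for cause, effect, needed in steps:
        absent = needed - context_factors
        if absent:
            gaps.append((f"{cause} -> {effect}", absent))
    return gaps

# A context supplying every factor: the thin ToC arrows are all supported.
print(missing_support({"teacher time", "materials", "pupil attendance"}))  # []
# A context missing factors: two steps are blocked, so the prediction fails.
print(missing_support({"teacher time"}))
```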

Lecture 16: Making data reusable: lessons learned from replications of impact evaluations, Marie Gaarder (International Initiative for Impact Evaluation [3ie])

In recent years, efforts to replicate the findings of scientific studies indicate that many results cannot be verified. In other words, reported findings cannot be reproduced using the original dataset and analysis code. The ‘replication crisis’ (as it has come to be known) appears to be a cross-disciplinary challenge. While this has led to a call for more replications, in practice there are few incentives for doing so. In the international development sector, there is an emphasis on developing interventions and policies that are grounded in rigorous evidence. Given the limited resources available to tackle large-scale challenges, it is imperative that policymaking and programming draw upon lessons learned from evaluations of development interventions. But, given the replication crisis, how reliable is this evidence? In its role as a producer and synthesizer of evidence, the International Initiative for Impact Evaluation (3ie) funds impact evaluations of development interventions and policies in low- and middle-income countries. In 2018, we embarked on a project to replicate published evaluation results using the data and analysis code submitted by evaluation teams. The talk will present the findings from this effort and discuss lessons learned and possible recommendations for various actors, hopefully with active participation from the audience. Watch the lecture online and download the slides.
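A minimal sketch of what such a replication check can look like, offered as an illustration rather than as 3ie’s actual pipeline: recompute the headline estimate from the submitted data and compare it with the published value, allowing only rounding error. The data, estimator and published value below are all invented.

```python
# Push-button replication check: re-run the analysis on the original data
# and flag any discrepancy with the published estimate.
import math
import statistics

PUBLISHED_EFFECT = 0.12  # effect size as reported in the publication (assumed)

def rerun_analysis(treated, control):
    """Recompute the headline estimate: here, a simple difference in means."""
    return statistics.mean(treated) - statistics.mean(control)

# Stand-in for the dataset submitted by the evaluation team.
treated = [0.9, 1.1, 1.0, 1.2]
control = [0.95, 1.0, 0.9, 0.95]

estimate = rerun_analysis(treated, control)  # 1.05 - 0.95 = 0.10
if math.isclose(estimate, PUBLISHED_EFFECT, abs_tol=1e-6):
    print("reproduced: re-run matches the published estimate")
else:
    print(f"discrepancy: published {PUBLISHED_EFFECT}, re-run {estimate:.4f}")
```

In this invented case the re-run gives 0.10 against a published 0.12, the kind of discrepancy the replication project was set up to detect and explain.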

Lecture 17: Turning ‘evidence for development’ on its head, Ruth Stewart (Africa Centre for Evidence)

In an era of decoloniality, post-‘development’ and antipatriarchy, the evidence-based movement in the North is failing to move with the times and, as a result, is outdated and risks being ineffective. Living and working in the global South, I experience a world in which innovation in evidence-informed decision-making and its related methodologies is necessary, routine and inspirational, yet largely ignored by the global North. Whilst resource-poor and not well publicised, the evidence community across Africa is world-leading in a number of respects. This lecture is a call to arms for all those who want to ensure that better evidence leads to better decisions and to better futures for those living in resource-poor environments. It proposes a new lens through which to view ‘evidence for development’. It celebrates the successes of Southern evidence communities, achieved largely in spite of, and not because of, Northern good intentions. Watch the lecture online.
