Gaps in evaluation methods in development, and what innovations might address them
On 21 March 2017, the Centre for Evaluation invited three speakers, leaders in the fields of social science, philosophy of science, and political science, to give their perspectives on the gaps in current evaluation methods.
Audrey Prost works at UCL’s Institute for Global Health and in close collaboration with the Indian civil society organisation Ekjut. Together they have conducted two large cluster randomised controlled trials of participatory women’s groups to improve maternal and newborn health in largely tribal areas of Jharkhand and Odisha, eastern India. The Ekjut trial, published in the Lancet in 2010, won the Society for Clinical Trials’ ‘Trial of the Year’ award in March 2011. Audrey, Ekjut, and the Public Health Foundation of India now collaborate on CARING, an MRC/Wellcome Trust/DFID-funded trial testing a community intervention with participatory women’s groups and home visits to improve the growth of children under two in Jharkhand and Odisha. Audrey is also an associate editor of Trials and is interested in the design and analysis of trials of complex social interventions.
Her talk was titled “Using mixed methods to evaluate the effects of scaling up participatory women’s groups for maternal and newborn health in rural India”.
This talk described the use of mixed and multiple methods to evaluate the effects of participatory women’s groups to improve maternal and newborn health across the state of Jharkhand, in eastern India. She used this case study to outline key challenges faced when mixing methods, and discussed the practical and epistemological implications of solutions suggested by the MRC guidance for process evaluation, Program Impact Pathway analyses, and approaches to mixed methods synthesis.
Nancy Cartwright is Professor of Philosophy at the Department of Philosophy, University of Durham, and at the University of California, San Diego (UCSD). Her research interests include the philosophy and history of science (especially physics and economics), causal inference, causal powers, scientific emergence, and objectivity and evidence, especially evidence for evidence-based policy (EBP). Her current work, for the project ‘Knowledge for Use’ (K4U), investigates how to use scientific research results for better policies. She is a member of the UK voluntary research network Policy Insight, which ‘aims to develop a new methodology for policy formulation, deliberation, evaluation and choice.’ She has worked with others on projects in this area in education, child protection, and international development.
Her talk was titled “Intervention-centring, Context-centring: Two approaches, two sets of method gaps”.
The intervention-centred approach to predicting policy effectiveness pictures successful policies as having something like an internal capability to produce the outcome if only circumstances are right. The big question then is: ‘When are circumstances right?’ Hence the current emphasis on ‘What works when and where?’ The problem is that we do not know how to answer this question. For instance, subgroup analysis cannot do the job. That is not because it is open to data mining, nor because it cannot establish causality; rather, it looks to identify moderator variables for the theory of change, whereas what is needed is to identify which systems will afford the intervention/outcome process envisaged in the theory of change. The context-centred approach essentially requires building a local causal model. Although there may be much discipline-specific expertise at this, there is little general guidance on how to build such models, how to categorise the variety of different kinds of evidence that can bear on them, or how to put the evidence together to arrive at an overall judgment.
Macartan Humphreys works on the political economy of development and formal political theory. Ongoing research focuses on post-conflict development, ethnic politics, political authority and leadership, and democratic development with a current focus on the use of field experiments to study democratic decision-making in post-conflict and developing areas. He has worked in Chad, Ghana, Haiti, Indonesia, Liberia, Mali, Sao Tome and Principe, Sierra Leone, Senegal, Uganda, and elsewhere. He is a former Trudeau fellow and scholar of the Harvard Academy, a research fellow at the WZB Berlin, the Executive Director of the Experiments in Governance and Politics research network and a Professor of Political Science at Columbia University.
His talk was titled “Mixing Methods: A Bayesian Approach”.
He presented a new approach to multi-method research that generates learning from quantitative and qualitative evidence. The method uses a Bayesian framework to draw causal inferences from combinations of correlational and process-level observations. In addition to posterior estimates of causal effects, the framework updates confidence in the assumptions underlying the model. He illustrated the approach with applications to substantive issues in political science, and demonstrated how the framework can yield guidance on multi-method research design.
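The core idea, updating beliefs about a causal hypothesis using both a correlational result and a process-level (qualitative) observation, can be sketched with a minimal example of Bayes’ rule. The probabilities below are illustrative assumptions for exposition only; they are not taken from the talk or from Humphreys’ framework.

```python
# A minimal sketch of Bayesian updating over a binary causal hypothesis,
# combining quantitative (correlational) and qualitative (process-level)
# evidence. All probability values here are illustrative assumptions.

def posterior(prior, p_evidence_if_causal, p_evidence_if_not):
    """Bayes' rule: updated probability that the causal hypothesis is true."""
    numerator = prior * p_evidence_if_causal
    denominator = numerator + (1 - prior) * p_evidence_if_not
    return numerator / denominator

# Start agnostic about whether the intervention caused the outcome.
p = 0.5

# Quantitative evidence: a positive correlation, assumed somewhat more
# likely if the causal effect is real (0.8) than if it is not (0.4).
p = posterior(p, 0.8, 0.4)

# Qualitative evidence: a "smoking gun" process observation, assumed to
# appear rarely (0.1) unless the causal mechanism actually operated (0.6).
p = posterior(p, 0.6, 0.1)

print(round(p, 3))  # posterior belief after both pieces of evidence
```

Note how neither piece of evidence is decisive on its own: the correlation alone moves belief only modestly, while the process observation is what drives most of the final update. That interplay is what a Bayesian mixed-methods framework formalises.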