Counterfactual: The Centre for Evaluation Blog
Last month, the British Council Research Links programme and the EC SDH-net consortium hosted a workshop entitled ‘Towards comprehensive evaluation for health and development: promoting the integration of evaluation methods.’ In our latest blog, one participant reflects on a panel debate on the differences between economists and epidemiologists in their approach to evaluation.
By Olga Biosca
The day starts with a panel discussion on different approaches to outcome evaluation. Two health economists (Juan Pablo Gutierrez from the Instituto Nacional de Salud Pública, Mexico; Timothy Powell-Jackson from the London School of Hygiene & Tropical Medicine) and two epidemiologists (Calum Davey and James Hargreaves,
both from the London School of Hygiene & Tropical Medicine) are on the panel. A multidisciplinary bunch of UK- and Mexico-based researchers with a shared interest in better evaluation of complex interventions waits anxiously. What happens next? Well, believe it or not, a very honest, constructive sharing of perspectives.
Are we talking the same language?
In short: no. But there is a long answer that is more nuanced.
First, we do seem to be talking about the same ‘things’, which is somehow comforting. Economists talk in (or rather, write) long and complicated pieces to explain abstract theoretical propositions. Epidemiologists are more succinct and keener on diagrams that illustrate their thinking about ‘real-world’ interventions.
The general reason behind the use of these different ‘languages’ is that economists and epidemiologists have different incentives for evaluating interventions. For us economists, a key motivation is generating, proving or disproving theory. Frequently, an important aim of an evaluation is to test in the field which tweaks to the intervention perform better or worse (as a way of testing the underlying economic theory). This could imply that our incentives to report statistically significant results are even higher than for epidemiologists. We tend to be more open to innovative econometrics, and to disclosing our datasets and do-files. We do not worry too much if the interventions we look at are idealised and non-scalable, and as such not intended for use in a real-world setting.
Epidemiologists are motivated by understanding whether the programme with the best possible design achieves its intended outcomes when implemented in the real world. They are concerned with the implementation process and with missing data. They register their studies, evaluate interventions adhering to set criteria and follow rigid reporting guidelines. This allows for systematic reviews of the literature and meta-analysis, but also standardises the process and makes it much more transparent.
As an economist who has recently been drawn into multidisciplinary research, I find it hugely interesting to learn that one of the most influential papers in development economics has been ‘translated’ into ‘epi’ (epidemiologists’ language), as its quality had been questioned by the Cochrane Collaboration. Reassuringly, it appears that the results are robust to being explained in ‘epi’. This could be the ultimate proof that even if we talk in different languages, we are ultimately talking about similar things.
Are we exploring the same outcomes?
We definitely are, and that is partly what brings us together. Yet an outcome-related difference seems to be that economists tend to look at a broad range of outcomes in their quest to capture the full (intended or unintended) effects of the programmes, while epidemiologists are mainly concerned with primary outcomes. Again, this complicates interpretation for economists and generally contributes to the longer papers.
Are we equally aware of ethics?
Economists are candidly accused during the debate of taking a more light-hearted approach to ethics than epidemiologists. The reasons mentioned in the lively discussion? Epidemiologists follow a more clinical tradition, and recurrent crises have contributed to tighter ethics-related considerations. Also, in economics, disclosing too much information about the research programme to participants might affect their behaviour, which will in turn affect the quality of the study. Finally, we all agreed that the adverse consequences for participants of social trials are more diffuse and therefore difficult to observe, but this is precisely why we all need to be extremely vigilant about ethical standards.
And, finally, are we using the same methods?
Well, it looks like we are. In fact, the day continues with a hands-on review of several ‘usual suspect’ quasi-experimental methods such as diff-in-diff and regression discontinuity, as well as other exciting innovations such as the synthetic control method and causal mediation analysis. The disciplinary backgrounds of the UK-Mexico crowd do not seem to be a determinant of the level of excitement.
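For readers less familiar with diff-in-diff, the core arithmetic behind the simplest two-group, two-period version can be sketched as below. The numbers are purely illustrative assumptions for this post, not data from the workshop or from any study discussed there:

```python
# Minimal diff-in-diff sketch: compare the change over time in a treated
# group with the change over time in a control group. The control group's
# change stands in for what would have happened to the treated group
# without the intervention (the "parallel trends" assumption).

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Estimated effect = (treated change) - (control change)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean outcomes before and after an intervention:
effect = diff_in_diff(treat_pre=50.0, treat_post=62.0,
                      control_pre=48.0, control_post=53.0)
print(effect)  # treated gained 12, controls gained 5 -> estimated effect 7.0
```

In practice both disciplines would run this as a regression with an interaction term (group × period), which gives the same point estimate in the two-by-two case while also providing standard errors and allowing covariates.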
So, in conclusion?
Realising how much overlap exists across the disciplines made me aware that, if we are to learn from each other, the key is engaging in an informed debate about evaluation that we can all understand. What do you think? Now we ‘are talking’!?!
Olga Biosca is Lecturer in Social Business and Microfinance at Glasgow Caledonian University.