
Counterfactual: The Centre for Evaluation Blog

Inter-disciplinarity – the new norm for public health

My blog last term on the changing nature of evaluation research highlighted an increasingly important aspect of global public health research today: inter-disciplinarity. Interventions, programmes and policies that aim to improve public health, or that have unintended or secondary consequences in this area, are often neither designed nor evaluated only by those from “within” public health. This poses challenges to those of us whose primary concern is evaluating key influences on public health outcomes. It forces us to step outside our comfort zone and seek wisdom from other disciplines such as economics, agriculture, urban planning and psychology.

As public health practitioners, we need to understand how the often complex “interventions” that emerge from outside public health operate in practice. Such “interventions” might be as diverse as free bus travel for young people or London hosting the Olympics. Being a public health evaluator often requires first learning how the influences on public health that we seek to study are delivered and responded to.

Interdisciplinary debates about evaluation methods have been alive and well within public health for as long as anyone can remember. For a nice exchange on a recent attempt to bridge the oldest divide of them all – to trial or not to trial – read Chris Bonell’s excellent paper on “realist RCTs” and the subsequent exchanges. Chris will be speaking at a Centre for Evaluation seminar on 21st January. Another view is put forward in Paul Farmer’s Lancet blog, which challenges the use of RCTs to evaluate interventions as they are transferred and scaled up in low-income settings. This tradition of debate within public health, and the huge range of methods, mixed and otherwise, already used within our field, are a valuable part of our history and of enormous value to other disciplines.

Inter-disciplinary research is essential to tackle the major questions we face in public health. One area of growing importance in this regard is the science of generalisability. Conducting evaluations in a way that can inform the more effective scale-up and transfer of promising strategies across settings will require multiple perspectives. An excellent half-day “Generalisability symposium” this term brought together mathematical modellers, epidemiologists, economists, sociologists and anthropologists to discuss these issues. Both concepts and tools were explored, and further development is planned in the coming year.

We must also understand how public health interventions are evaluated by those who don’t speak the same (technical!) language as us. The Centre for Evaluation has taken a particular interest in the growing overlap between evaluation studies of public health interventions reported on the one hand by economists and on the other by epidemiologists and public health researchers. Alongside trials of medicines, public health interventions are increasingly reported in journals using the CONSORT statement, which has been extended for use with complex interventions. Conversely, important RCTs on health issues conducted by economists are often published as working papers in very different formats. Just as within epidemiology, there is diversity within economics. The esteemed Princeton Professor Angus Deaton said at LSHTM in April that “Epidemiological methods are having a large and largely unhelpful effect on statistical practice in economics”. At the same time, there is a growing group of economist-evaluators making arguments within their own constituencies about the benefits of insights from biomedicine and public health methods.

Trivial academic concerns? Perhaps – but these disciplinary divides underlie a fierce debate about the value of deworming that has ignited this year in the BMJ, while Richard Horton denounced as a “glorious failure” a recent attempt to co-publish systematic reviews in health and development journals. These communication breakdowns are not helpful to the populations we serve as evaluators, and progress is needed.

The Centre has much to offer in engaging with these debates: we can support efforts to explore the utility of initiatives such as CONSORT in other fields, and we can share the expertise and experience in adopting plural approaches to evaluation that we have accrued in public health. We have much to learn too, and many questions we need to ask, such as: how can different disciplines learn from each other? And how can we work together to improve our methods for evaluating increasingly complex intervention packages? I invite you to comment on these and other questions raised in this blog.
