The changing nature of interventions, programmes, systems and policies that require evaluation

Influences on the public’s health come in all shapes and sizes. For example, LSHTM is evaluating the impact on public health of initiatives to reduce the price of artemisinin-based combination therapies for malaria in several countries, and a universal test-and-treat intervention for HIV prevention delivered by community health workers in Zambia and South Africa. Common to these and many other diverse evaluations are an interest in the effectiveness of these “interventions” in improving public health under “real-life” conditions, and complexity in the “interventions” to be evaluated.

Establishing “real-life” effectiveness defines the new evaluation agenda. Decisions about evaluations delivered in practice must be made in partnership between implementers, evaluators, policy makers and a range of other stakeholders. Designing evaluations that accommodate the different priorities of these stakeholders is challenging – but that’s what keeps it interesting! These conditions do not preclude the use of randomised trial designs, but clear thinking and great effort are needed to conduct them. The PopART trial will randomise 21 communities in Zambia and South Africa, home to over 1 million people, and evaluate the effectiveness of a “Universal Test and Treat” intervention in reducing HIV infections. The trial follows the landmark HPTN052 trial (Science Magazine’s 2011 breakthrough of the year), which proved the concept that effective treatment of HIV infection enormously reduces the risk of onward transmission of HIV. The PopART trial will evaluate whether this scientific breakthrough can be translated into benefits to public health when delivered through real health systems. The trial PI, Richard Hayes, has pioneered the development of cluster-randomised trials for such situations; the study starts recruiting imminently.

Effectiveness evaluations often cannot deploy trial designs, and researchers at the School turn to innovative designs when faced with this problem. Kara Hanson’s group evaluated the roll-out of the Affordable Medicines Facility for Malaria using a range of techniques. Centre seminars during the last year have discussed the use of theories of change in evaluation and the complexities of evaluating programmes at scale. In 2014, in partnership with the Centre for Statistical Methodology, we will run a programme of events addressing the strengths and weaknesses of a range of quasi-experimental designs. The Centre’s theme on Process and Pathways also highlights the importance of mixed-methods research to strengthen evaluations in these settings.

Intervention complexity was also the theme of a one-day symposium we held at the School in May. The range of researchers grappling with these issues was heartening to see. Peter Craig gave reflections from five years of working with the 2008 updated MRC Guidance on developing and evaluating complex interventions, while a range of speakers gave thoughtful presentations on their approaches to dealing with the challenges that intervention complexity brings to evaluation studies.

Keeping up with the changing public health landscape, and ensuring that we are asking, and answering, the “right” questions with our evaluation research, is a challenge. As always, it is not enough to ask “what works?” We need to understand the conditions under which something works, the ways in which target populations are affected, and the issues that may help or hinder the success of an intervention.
