Re-analysis of data draws different conclusions from Kenya worms trial

James Hargreaves, Director, Centre for Evaluation

Today, researchers at the LSHTM Centre for Evaluation release the findings of an important three-year project.

In research sponsored by 3ie (the International Initiative for Impact Evaluation), we replicated Miguel and Kremer’s famous trial, conducted in Kenya in 1998-1999, which suggested that school-based deworming improved school attendance both among children in treated schools and, through “externality” effects, among children in nearby schools, even when those schools did not themselves receive deworming.

The findings are published as two new papers in the International Journal of Epidemiology.

In the first new paper, researchers used the same methods as the original study to re-analyse the trial data from Kenya. They found calculation errors in the original authors’ data, which meant there was no longer evidence that deworming increased school attendance among children at schools near those where children were treated.

In the second new paper, researchers re-analysed the school attendance and examination data using methods commonly applied by health researchers, rather than the econometric methods of the original study.
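To illustrate the kind of approach health researchers typically take to a cluster-randomised trial like this one, the sketch below compares school-level (cluster-level) attendance summaries between trial arms with a simple unpaired t-test. This is a generic, minimal illustration with hypothetical numbers, not the published re-analysis or its data.

```python
# Illustrative sketch only: cluster-level analysis of a cluster-randomised
# trial, as commonly used in epidemiology. All attendance figures below
# are hypothetical, not data from the Kenya trial.
from statistics import mean, stdev
from math import sqrt

# Hypothetical mean attendance proportion per school (the cluster).
treated_schools = [0.84, 0.79, 0.88, 0.81, 0.86]
control_schools = [0.78, 0.75, 0.82, 0.77, 0.80]

def cluster_level_t(a, b):
    """Unpaired t statistic on cluster summaries (equal-variance form)."""
    na, nb = len(a), len(b)
    pooled_var = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    se = sqrt(pooled_var * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se

diff = mean(treated_schools) - mean(control_schools)
t = cluster_level_t(treated_schools, control_schools)
print(f"risk difference = {diff:.3f}, t = {t:.2f}")
```

Analysing cluster summaries respects the unit of randomisation (the school), avoiding the artificially narrow confidence intervals that can arise from treating individual pupils as independent observations.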

You can read the response of the original papers’ authors (Hicks, Miguel and Kremer) to our work, as well as our final conclusions.

There is also a blog article in The Conversation by Calum Davey and James Hargreaves reflecting on the work.

An updated Cochrane review on the effects of deworming that considers both our analysis and the responses of the authors is also released today.

Interested parties will want to carefully consider these papers and make up their own minds about the findings.

In short, following careful re-analysis of the data we draw somewhat different conclusions from the Kenya trial than the original authors.

While many findings were, inevitably, the same, two differences are especially noteworthy.

First, when coding errors in the original authors’ files were corrected, the evidence for “externalities” (benefits of the deworming programme among children at schools that were not themselves dewormed) became much weaker.

Second, our different statistical approach to the data revealed substantial missing data and other patterns that led us to conclude that the trial findings, though suggestive of increased school attendance in dewormed schools, were at high risk of bias.

The mission of the Centre for Evaluation is to improve the design and conduct of public health evaluations through the development, application and dissemination of rigorous methods, and to facilitate the use of robust evidence to inform policy and practice decisions. This project has important relevance for all three of our methodological themes.

Our Design and Analysis theme develops and applies novel and rigorous methods for the evaluation of public health interventions, including randomised controlled trials.

We recommend that the new wave of randomised trials in development adopt standards from the medical field.

This is already happening: pre-analysis plans are becoming more common. Adopting – or adapting – the medical field’s standards for trial reporting will lead to more transparent conclusions and reveal risks of bias.

Our Implementation, Pathways and Consequences theme focuses on methods that help unpick what aspects of the delivery of an intervention were most important.

We recommend that implementation science and process evaluation within evaluations also be strengthened.

It is often overlooked that the Miguel and Kremer intervention included a range of additional components, such as school-based education on deworming. Wider adoption of the recent MRC guidance on process evaluation of complex interventions would strengthen the new wave of development evaluations.

Finally, our Knowledge Synthesis and Evidence Translation theme helps policy makers appraise the full evidence base and make policy decisions. Our reanalysis of this one trial alone should not drive policy, and we do not make policy prescriptions on the basis of our work.

We recommend systematic reviews as the best way to inform policy and avoid hype.

It is appropriate that both our work and the responses of the original authors have been considered in this way in the new Cochrane Review.