What can a study in Kenya tell us about the effect of deworming on school attendance?

Researchers at the London School of Hygiene & Tropical Medicine recently published a re-analysis of an influential study estimating the impact of deworming drugs on educational outcomes. Their conclusions about what the study is able to show differed from those of the original authors, who published a response in the same journal. The LSHTM researchers have defended their conclusions, and many commentators have weighed in on the subject.

Lead author Dr. Alex Aiken presented the results of the re-analysis at an open lecture at the School on the 22nd of September. The lecture (30 mins) and questions are available to watch here.

Alex said that he and the other authors – Calum Davey, Dr. James Hargreaves, and Professor Richard Hayes – came to this project because they were interested in methods of evaluation. He said: “we didn’t have specific interest in deworming in schools, but we knew about this study and we knew that the Cochrane Group had graded it as at ‘high risk of bias’ in their systematic review in 2012, so we wondered if perhaps the paper had been misunderstood in some way. We came to this with an open mind.” Alex described the study as a quasi-randomised stepped-wedge trial in which 75 schools in Kenya were allocated to three groups: the first received deworming treatment and health education in both 1998 and 1999, the second received the treatment in 1999 only, and the third did not receive the intervention until after the study was over.
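The three-group allocation described above can be sketched as a small lookup table. This is purely illustrative: the group names and year labels follow the description in the lecture, but the source does not give per-group school counts, so none are assumed here.

```python
# Sketch of the stepped-wedge allocation: which groups were treated in which
# study year. Illustrative only; the source gives no per-group school counts.
allocation = {
    "Group 1": {"1998": "treated", "1999": "treated"},  # treated in both years
    "Group 2": {"1998": "control", "1999": "treated"},  # crosses over in 1999
    "Group 3": {"1998": "control", "1999": "control"},  # treated only after the study
}
```

Laying the design out this way makes the "stepped wedge" visible: each successive group steps from control to treatment one year later than the last.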

There were two stages to the re-analysis: (1) re-analysis of the same data with the original methods, which could be thought of as a ‘scientific audit’, and (2) re-analysis of the same data with modern methods based on an adaptation of the CONSORT guidelines for cluster randomised trials. This second stage was based on a pre-analysis plan, published online in 2013, which Alex said, “we didn’t deviate [from] in any meaningful way.”

The first stage – the ‘scientific audit’ – found most of the figures in the paper to be correct. Two discrepancies stood out. First, the reported effect on anaemia was not, in fact, supported by any statistical evidence. Second, because of an error in the computer code, the reported effects of the intervention on schools near treated schools could not be replicated at the 3–6 km distances, and hence neither could the overall spill-over effects as originally calculated. During questions Alex was asked to comment on further work by the authors of the original paper concluding that the spill-over effects exist at other distances, and he said, “we set out to analyse the effects as described in the original study… we are aware that the original authors have now published some new analyses using alternative distances… well, that’s a different analysis.”

The overall conclusion of the second stage of the re-analysis was that while there was some evidence of an effect on school attendance, there was a high risk of bias. There was no evidence of effects on health outcomes (aside from reducing worm infections) or on academic attainment, measured using exams in English, Maths, and Science.

The risk of bias in the attendance effect stemmed from a number of sources. Attendance was measured by field-workers visiting the schools, and there was a lot of missing data: “quite a lot of planned visits didn’t actually happen, [which was] a particular problem in the second year of the study, and especially in Group 2.” The Group 2 schools were the ones that ‘crossed over’ from control to intervention between 1998 and 1999. Furthermore, there were some strange patterns in the data suggesting that the measured attendance at a school was related to the amount of data collected in that school, and that this overall relationship changed from one year to the next, thereby changing with intervention status because of Group 2 crossing over.

But the major concern for Alex and his colleagues was what happened when they tried to treat this stepped-wedge trial as a pair of parallel trials: an approach that reduces the risk of bias arising from the fact that schools in Group 2 cross over from control to intervention. They found that the results from combining the data from the two years were inconsistent with the more robust within-year effects. During questions it was suggested that the combined-year results looked “odd”, and that “it would seem that the [combined year model] is making assumptions that do not hold in practice.” This issue was picked up in the symposium on stepped-wedge trials that followed Alex’s presentation.
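The “pair of parallel trials” idea can be made concrete with a toy calculation. The attendance proportions below are invented purely to show the structure of the comparison and bear no relation to the study’s actual data; the group labels follow the design described earlier.

```python
# Toy illustration of treating a two-step stepped-wedge design as a pair of
# parallel trials. Attendance figures are invented for illustration only.
attendance = {
    ("Group 1", 1998): 0.78, ("Group 2", 1998): 0.72, ("Group 3", 1998): 0.73,
    ("Group 1", 1999): 0.77, ("Group 2", 1999): 0.75, ("Group 3", 1999): 0.70,
}
# In year one only Group 1 is treated; by year two Group 2 has crossed over.
treated = {1998: {"Group 1"}, 1999: {"Group 1", "Group 2"}}

def within_year_effect(year):
    """Treated-minus-control difference in mean attendance within one year."""
    t = [v for (g, y), v in attendance.items() if y == year and g in treated[year]]
    c = [v for (g, y), v in attendance.items() if y == year and g not in treated[year]]
    return sum(t) / len(t) - sum(c) / len(c)

# Each year forms an internally controlled comparison. A model that instead
# pools both years must make extra assumptions (e.g. no underlying time trend
# in attendance); when those assumptions fail, the combined estimate can
# disagree with the within-year effects.
effect_year1 = within_year_effect(1998)  # Group 1 vs Groups 2 & 3
effect_year2 = within_year_effect(1999)  # Groups 1 & 2 vs Group 3
```

The design choice this sketch highlights is the one raised in the questions: the within-year comparisons need fewer assumptions than the combined-year model, so a discrepancy between the two is a warning sign about those extra assumptions.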

Alex said that he hoped more people would do re-analyses in the future. He closed by offering “an enormous thank-you and expressing our great respect to the original authors who went hugely out of their way to help us with our re-analysis work and to congratulate them on their bravery in offering up their data to some people that they don’t know at all to pick over and pull apart. They deserve great credit.”

Read the BBC news article covering this study here.