Counterfactual: The Centre for Evaluation Blog

Quality Improvement

A number of projects at the School are concurrently tackling the complex issue of evaluating quality improvement. What diverse approaches are these projects using? What do they have in common? Where do the challenges lie? We decided to bring together some of the individuals involved in quality improvement projects for a half-day workshop to discuss these questions and what next steps could be taken. At the workshop we had individuals working on five projects:

EQUIP in Tanzania – an intervention study investigating the feasibility and community effectiveness of innovative intervention packages to improve maternal and newborn health outcomes in Africa.

INSIST in Tanzania – a project working to develop, implement and evaluate the effectiveness and cost of interventions at community level (focussed on a community-based health worker) and of health system strengthening on newborn survival in rural southern Tanzania.

BHOMA in Zambia – a stepped-wedge cluster randomised trial evaluating the impact on mortality and service coverage of a health systems strengthening intervention that works to improve clinical care in primary healthcare facilities and improve linkages with community health workers.

IeDA in Burkina Faso – another stepped-wedge cluster randomised trial, investigating a promising new approach of using tablet computers to guide practitioners through an Integrated Management of Childhood Illness consultation with children under five years old, to determine the impact on correct diagnosis and treatment.

P4P in Tanzania – a controlled before and after study evaluating a pay for performance programme (the use of supply-side incentives to increase health service utilisation and enhance service quality) in Pwani region.

What study design?

The diversity of study designs was a central discussion point in understanding the projects at the workshop – from stepped-wedge cluster RCTs (BHOMA and IeDA) to a controlled before-and-after design (P4P) – reflecting the wide range of intervention packages being applied for quality improvement.
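A stepped-wedge design, as used in BHOMA and IeDA, rolls the intervention out to clusters in a randomised sequence until all clusters are exposed, so every cluster contributes both control and intervention periods. A minimal sketch of such a rollout schedule (the cluster counts and number of steps here are illustrative, not taken from either trial):

```python
import random

def stepped_wedge_schedule(n_clusters, n_steps, seed=0):
    """Randomly assign clusters to crossover steps.

    Returns a matrix (clusters x time periods) where 0 = control
    and 1 = intervention. Every cluster starts under control
    conditions, and all have crossed over by the final period.
    """
    rng = random.Random(seed)
    clusters = list(range(n_clusters))
    rng.shuffle(clusters)
    # Divide the randomised cluster order as evenly as possible across steps.
    per_step = [clusters[i::n_steps] for i in range(n_steps)]
    n_periods = n_steps + 1  # one baseline period, then one per step
    schedule = [[0] * n_periods for _ in range(n_clusters)]
    for step, group in enumerate(per_step, start=1):
        for c in group:
            for t in range(step, n_periods):
                schedule[c][t] = 1  # once crossed over, stays exposed
    return schedule

sched = stepped_wedge_schedule(n_clusters=6, n_steps=3)
for row in sched:
    print(row)
```

The defining features are visible in the output: the first column is all zeros (no cluster starts exposed), the last is all ones, and each row switches from 0 to 1 exactly once.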

What to measure?

The outcome measures selected by research projects to capture improvements in quality (from population-based morbidity and mortality measures to qualitative reports of improvement from health workers) were clearly a challenge in designing quality improvement evaluations. What subjective or objective indicators represent a genuine improvement in quality? And when should they be measured? There is often a disconnect between what evaluators think should be measured and what service providers perceive as their own priorities.

The nomenclature on quality is difficult to navigate. Quality could be measured purely by objectively verifiable indicators (inputs, processes, outputs) or through clinical observation (are staff following guidelines?). Alternatively, evaluations could use more subjective measures of end users' perceptions, but these may conflict with improvements in coverage of interventions or with service providers' perspectives on the situation. For example, where the general experience of care is poor, expectations may be correspondingly low and therefore unhelpful in highlighting quality gaps. Furthermore, in many settings there may be a power imbalance between healthcare provider and end user, which may influence users' ability to make objective judgements.

With respect to actual data use, the general experience across projects was that health facility and district teams working on quality improvement were usually thrilled to have local data – otherwise unavailable to them – for monitoring their projects.

Given the duration of many of the projects, there will often be other interventions (such as changing guidelines, or other research programmes) taking place in some or even all of the study areas. These need to be recorded, but how best to account for them in the analysis is often unclear.

When to measure?

A number of projects drew attention to the issue of time lag between interventions and impact. The interventions discussed varied in implementation duration from 30 months to 5 years. EQUIP was designed with continuous data collection, but even so, especially for maternal and newborn health interventions, the study period may have been too short to measure behaviour change or changes in morbidity and mortality. Furthermore, quality indicators tend to bounce around when the sample size is not large enough, which will be a common problem for relatively rare conditions. Indicators can also change at different (and sometimes unexpected) rates. These time lags, and the lack of precision in some indicators, can leave teams feeling demoralised if their hard work is not quickly reflected in the indicators.
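The "bouncing around" of indicators under small samples can be shown with a quick simulation. Here the true performance level (80%) and the monthly caseloads are invented purely for illustration: with the same underlying quality, a small monthly sample produces an indicator that swings widely month to month, while a large sample tracks the true rate closely.

```python
import random
import statistics

def monthly_indicator(true_rate, n_cases, n_months, seed=1):
    """Simulate a monthly quality indicator (e.g. the proportion of
    cases correctly managed) as the observed proportion among a
    random sample of cases each month, with constant true quality."""
    rng = random.Random(seed)
    return [
        sum(rng.random() < true_rate for _ in range(n_cases)) / n_cases
        for _ in range(n_months)
    ]

# Identical 80% underlying performance, very different apparent stability:
small = monthly_indicator(true_rate=0.8, n_cases=10, n_months=12)
large = monthly_indicator(true_rate=0.8, n_cases=400, n_months=12)
print("month-to-month sd with 10 cases/month: ", round(statistics.stdev(small), 3))
print("month-to-month sd with 400 cases/month:", round(statistics.stdev(large), 3))
```

A team watching the 10-cases-per-month series could easily conclude that quality rose or fell in a given month when nothing real changed – which is one reason short runs of a low-volume indicator are a poor basis for judging whether hard work is paying off.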

Other challenges:

We need to come up with good strategies for transforming data into knowledge that is useful for service providers. Several teams struggled with the resource-heavy mentoring this requires: measurement teams and mentoring are essential for quality improvement, but they are resource-intensive, and appropriate mentors can be difficult to identify because of workloads and conflicting engagements. In some settings change was seen rapidly but proved difficult to sustain over longer periods of time.

What next:

It was universally agreed that, given the diversity of options for quality improvement evaluation, knowledge sharing and programme learning are valuable in tackling these issues. The workshop also revealed a wider desire for harmonisation of quality indicators. We have agreed to organise a larger LSHTM meeting on the evaluation of quality improvement interventions, involving more individuals in this exciting discussion (watch this space).
