Workshop reflections – balancing trade-offs in the evaluation of diverse portfolios.

The LSHTM Centre for Evaluation recently ran a workshop at which three teams presented their respective plans for evaluating large, diverse intervention portfolios. The workshop provided a terrific forum for collaborative learning in this complex, emerging area among stakeholders involved in funding, implementing and evaluating a range of interventions across diverse settings.

Three evaluation teams were represented, each currently responsible for coordinating portfolio-level evaluations for the programmes listed below:

  • The DFID-funded Health Partnership Scheme, comprising over 200 grants for projects in 30 countries, working in areas from maternal health to mental health to biomedical equipment management. Running from 2012 to 2017, the scheme is managed and supported by the Tropical Health & Education Trust, a global health organisation that runs and supports health partnerships between health institutions in the UK and in developing countries. (See http://www.thet.org/health-partnership-scheme)
  • The ‘Saving Brains’ portfolio, a Grand Challenges Canada-funded partnership which has awarded grants to 107 teams in low- and middle-income countries to develop sustainable and scalable ways of promoting and nurturing healthy brain development in the first 1,000 days after birth. (See grandchallenges.ca/saving-brains/; the evaluation team is based within march.lshtm.ac.uk)
  • The Global Mental Health Platform, a Grand Challenges Canada-funded partnership which seeks to improve treatment of and access to care for mental disorders, through approaches with the potential to be sustainable at scale in low- and middle-income countries. (See grandchallenges.ca/grand-challenges/global-mental-health/; the evaluation team is based within www.centreforglobalmentalhealth.org/)

Presentations of each group's evaluation plans, followed by audience feedback, generated useful discussion of the common challenges experienced, the role of portfolio-level evaluation from different stakeholder perspectives, and possible methodological approaches.

Across the portfolios described there were common challenges, most obviously the size and heterogeneity of the work, in terms of both the interventions involved and the settings in which they were implemented. There were also significant differences across the portfolios in organisational structure, content of work and planned evaluations. A range of further challenges was discussed, including limited outcome and impact data, and data management and logistics across multi-stakeholder platforms.

In considering how to structure the evaluation of such diverse areas of work, the early development and use of theories of change, with relevant associated indicators, was discussed as important to implementation as well as evaluation.
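As a purely hypothetical illustration of this point, the sketch below shows one way a theory of change and its indicators might be written down so that the same structure serves both implementation monitoring and evaluation. The steps and indicators are invented for illustration and are not drawn from any of the portfolios above.

```python
# A minimal, purely illustrative sketch: a theory of change represented as a
# chain of steps, each carrying the indicators used to track it. All names
# and indicators here are invented for illustration.
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class Step:
    """One link in the causal chain, with the indicators used to monitor it."""
    level: str                      # e.g. "activity", "output", "outcome", "impact"
    description: str
    indicators: list[str] = field(default_factory=list)

theory_of_change = [
    Step("activity", "Train community health workers",
         ["number of workers trained"]),
    Step("output", "Trained workers deliver antenatal visits",
         ["visits delivered per month"]),
    Step("outcome", "More women attend four or more antenatal visits",
         ["% of pregnancies with 4+ visits"]),
    Step("impact", "Improved maternal and newborn health",
         ["facility-reported complication rate"]),
]

# The same structure supports implementation (tracking early steps) and
# evaluation (assessing whether later steps follow from earlier ones).
for step in theory_of_change:
    print(f"{step.level:>8}: {step.description} | {', '.join(step.indicators)}")
```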

Consideration of the perspectives of different stakeholders with regard to the overall purpose of portfolio-level evaluations also provided a useful lens for exploring different methodological approaches. In particular, recent writing from Nigel Simister of the International NGO Training and Research Centre (INTRAC) was highlighted.1 Specifically, Simister points to the difference between summarisation of portfolios for accountability and communication purposes, often a driving agenda for funding agencies, and detailed evaluation for cross-portfolio learning, which is more often a driver for partner organisations.1 The implications of each for evaluation methods were discussed, as summarised below:

Primary aim of evaluation, and the associated methods:
  • Communication & accountability – summarisation methods: aggregation, with an emphasis on simplicity.
  • Cross-portfolio learning – complex evaluation methods, focused on detail.

The inevitable trade-offs between these two approaches, especially when taken to their extremes, were also noted.

The need to consider portfolio-level data at different levels was also noted. Specifically, in the absence of impact and outcome data, summarisation may focus on output and activity levels, whereas detailed evaluations for research purposes will usually emphasise outcome and impact measures.
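To make the distinction concrete, the sketch below (using entirely made-up projects and figures) contrasts simple aggregation of output-level data across a portfolio with retention of project-level outcome data, including the gaps that often arise in practice.

```python
# An illustrative sketch with made-up numbers, contrasting the two uses of
# portfolio data described above: aggregation at the output level for
# communication and accountability, versus project-level outcome detail
# for cross-portfolio learning.
projects = [
    {"name": "Project A", "workers_trained": 120, "outcome_change_pct": 14.0},
    {"name": "Project B", "workers_trained": 45,  "outcome_change_pct": None},   # outcome not yet measured
    {"name": "Project C", "workers_trained": 210, "outcome_change_pct": 3.5},
]

# Summarisation: collapse output-level data into a single portfolio figure.
total_trained = sum(p["workers_trained"] for p in projects)
print(f"Portfolio total workers trained: {total_trained}")

# Detailed evaluation: keep project-level outcome data, making visible the
# gaps that complicate portfolio-wide outcome statements.
for p in projects:
    change = p["outcome_change_pct"]
    status = f"{change}% change" if change is not None else "not measured"
    print(f"{p['name']}: {status}")
```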

There was also recognition that important lessons are often not captured by quantitative methods alone, with many important steps on the pathway towards impact requiring qualitative assessment. Hence there was strong emphasis on supplementary qualitative methods, and useful discussion of various approaches to guide these, depending on the overall aims of the evaluation. Several of us are also keen to explore the potential of Qualitative Comparative Analysis within this line of work.
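For readers less familiar with Qualitative Comparative Analysis, the sketch below illustrates, with hypothetical projects and invented condition names, the kind of question a simple crisp-set QCA asks: which combinations of project conditions are consistently associated with a positive outcome?

```python
# A toy crisp-set QCA-style truth table, with hypothetical projects coded
# 1/0 on two invented conditions and on the outcome. Real QCA involves
# careful case selection and calibration; this only shows the basic logic.
from itertools import product

cases = [
    {"local_partner": 1, "govt_buyin": 1, "outcome": 1},
    {"local_partner": 1, "govt_buyin": 0, "outcome": 1},
    {"local_partner": 0, "govt_buyin": 1, "outcome": 0},
    {"local_partner": 0, "govt_buyin": 0, "outcome": 0},
    {"local_partner": 1, "govt_buyin": 1, "outcome": 1},
]
conditions = ["local_partner", "govt_buyin"]

# For each observed configuration of conditions, consistency is the share
# of matching cases that also show the outcome.
for config in product([0, 1], repeat=len(conditions)):
    matching = [c for c in cases
                if all(c[k] == v for k, v in zip(conditions, config))]
    if not matching:
        continue  # configuration not observed in the portfolio
    consistency = sum(c["outcome"] for c in matching) / len(matching)
    label = ", ".join(f"{k}={v}" for k, v in zip(conditions, config))
    print(f"{label}: n={len(matching)}, consistency={consistency:.2f}")
```

Even this toy truth table shows the appeal of the method for heterogeneous portfolios: it surfaces configurations of conditions associated with success, rather than single factors in isolation.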

The potential need to evaluate the ‘added value’ of portfolios beyond immediate beneficiaries, including, for example, contributions to global funding and research networks in specific areas, was also discussed. In addition, the potential value of incorporating research already being conducted within the projects that comprise each portfolio was considered.

Like most constructive multi-disciplinary discussions, the workshop raised at least as many questions as it answered. Many challenges in evaluating diverse portfolios were highlighted, as was the reality of the inevitable trade-offs involved in navigating competing priorities. The opportunity to begin discussing these issues with colleagues working in similar areas was invaluable. While challenging, it is a privilege to be working to better understand the lessons from recent work in each of our respective areas, in order to improve the health and well-being of vulnerable populations worldwide.

Of the many ‘pearls’ from this workshop, Einstein’s well-known maxim resonated in particular and has found its place on our office wall for at least the duration of this evaluation: “Everything should be made as simple as possible, but not simpler.”
