Data use in large-scale improvement projects: The experience of Project Fives Alive!

In March 2015, I presented aspects of Project Fives Alive!’s work at the Consortium of Universities for Global Health meeting in Boston.

A curious colleague inquired how, given our reliance on routine national data systems, we could vouch for the quality of our data. 

The decision whether to rely on routine national data sets or create special parallel data sets is one that will continue to plague implementers of large-scale projects for a long time to come. 

In reality, the two options represent different sides of the same coin, with advantages and pitfalls in equal measure. 

Fortunately, Ghana’s Project Fives Alive! (PFA!) has tasted both sides of the data coin. 

For almost eight years, PFA! has rapidly scaled up a national Maternal, Newborn and Child Health (MNCH) project using a quality improvement (QI) approach. 

From 25 sub-districts in 2008, the project has cumulatively worked in 544 sub-districts as of 2015. PFA! has also cumulatively worked in 202 regional and district hospitals, representing almost 80 per cent of public hospitals in Ghana. 

In all these sites, the project has worked with over 700 multidisciplinary QI teams formed by managers to test local solutions for improving the quality and reliability of care for children under five. 

The impact? As of November 2014, across 134 hospitals: a 31 per cent reduction in under-five mortality, a 37 per cent reduction in post-neonatal infant mortality, and a 35 per cent reduction in under-five malaria case fatality. 

Presenting such impressive results on a large scale often raises legitimate questions from well-meaning colleagues about the credibility and validity of the data sources.  

The beginning 

Perhaps it is best to start from the very beginning in 2008 when, working in three districts in Northern Ghana, the project and its evaluation teams collected parallel data sets to validate the interventions, including some processes tracked with non-routine data. 

Two main issues emerged. First, at full scale we would have neither the time nor the resources to continue collecting data in this tedious manner, especially given the project’s strategy of continuously monitoring data over time. 

Second, and perhaps more seriously, the project ran into its first sceptic – a national officer who could not understand why and how process improvements allegedly being recorded and reported by the project did not find expression in the routine data system. 

Of course, we know there is often a lag between improvements in lower-level processes and their expression in system-level outcomes in such work. 

In 2009, upon rapidly scaling up high-impact interventions to all the then 38 districts in the three regions of the north, Project Fives Alive! commenced direct use of the routine data system, the District Health Information Management System (DHIMS). All the right reasons were adduced for this decision: alignment, promoting the use of local data for decision making, strengthening national data systems, and so on. It was not, however, smooth sailing. Internally and in many other fora, legitimate questions arose about the timeliness, completeness and accuracy of the data reported in the routine system. 

Progress

Having decided to strengthen the national data system rather than creating a parallel one, the project moved to the next logical step of writing and implementing a protocol on improving the quality of the routine data system. 

To improve accuracy, for example, our monitoring and evaluation officers, working in partnership with trained health information officers, compared source data in facilities to data reported to DHIMS and worked to close the gaps. 
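
To make that accuracy check concrete, here is a minimal sketch of the kind of comparison involved, assuming a simple audit list of recounted and reported values. The facility names, figures and the 10 per cent follow-up threshold are all illustrative, not PFA!'s actual protocol.

```python
# Illustrative accuracy audit: compare counts recounted from facility source
# registers with the counts reported to DHIMS. All names and numbers are
# hypothetical, as is the 10 per cent follow-up threshold.

def verification_factor(recounted: int, reported: int) -> float:
    """Recounted / reported: 1.0 is perfect agreement, below 1.0 suggests
    over-reporting, above 1.0 suggests under-reporting."""
    if reported == 0:
        return 1.0 if recounted == 0 else float("inf")
    return recounted / reported

facility_audits = [
    # (facility, recounted from register, reported to DHIMS)
    ("Hospital A", 118, 120),
    ("Hospital B", 95, 130),
]

TOLERANCE = 0.10  # flag gaps larger than +/-10% for follow-up

for facility, recounted, reported in facility_audits:
    vf = verification_factor(recounted, reported)
    status = "OK" if abs(vf - 1.0) <= TOLERANCE else "FOLLOW UP"
    print(f"{facility}: verification factor {vf:.2f} [{status}]")
```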

Faced with even more facilities at national scale, we had no option other than to team up with other national projects equally interested in relying on and improving the quality of the routine data system. 

Under the leadership of the Monitoring and Evaluation Unit of the Ghana Health Service, therefore, PFA!, MalariaCare, and the National Malaria Control Programme have, since 2013, rolled out an adapted protocol for continuously improving the timeliness, completeness and accuracy of the routine data system. 
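
For the timeliness and completeness dimensions, the usual definitions in routine data quality reviews compare reports received, and reports received on time, against the number expected. A minimal sketch, with invented district names and figures:

```python
# Sketch of monthly completeness and timeliness tracking. District names and
# figures are invented; the definitions (received/expected, on-time/expected)
# are the standard ones in routine data quality reviews.

from dataclasses import dataclass

@dataclass
class MonthlyReporting:
    district: str
    expected: int   # facility reports due this month
    received: int   # reports actually received
    on_time: int    # reports received by the deadline

rows = [
    MonthlyReporting("District X", expected=42, received=40, on_time=35),
    MonthlyReporting("District Y", expected=30, received=30, on_time=28),
]

for r in rows:
    completeness = r.received / r.expected
    timeliness = r.on_time / r.expected
    print(f"{r.district}: completeness {completeness:.0%}, "
          f"timeliness {timeliness:.0%}")
```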

There is data to show how data quality gaps are continuously being closed by national, regional and district health information officers and biostatisticians. 

Even so, questions about data reliability remain, leading to one senior researcher in Ghana commending PFA!’s reliance on DHIMS but also suggesting that such large-scale projects determine and state clearly the margin of error on the data being reported. 

As suggestions go, this is one that improvement scientists could well use. 
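
One simple way to act on that suggestion is to report a margin of error alongside each rate. The sketch below computes a 95 per cent margin of error for a hypothetical case fatality rate under a simple random sampling assumption; real facility data are clustered, so true intervals would be wider.

```python
# Reporting a margin of error alongside a rate, as the researcher suggested.
# Uses the normal approximation for a proportion under a simple random
# sampling assumption; the case and death counts are hypothetical.

from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from n observations."""
    return z * sqrt(p * (1 - p) / n)

cases = 2500               # hypothetical under-five malaria admissions
deaths = 75                # hypothetical deaths among those admissions
cfr = deaths / cases       # case fatality rate: 3.0%
moe = margin_of_error(cfr, cases)
print(f"Case fatality rate: {cfr:.1%} +/- {moe:.1%}")  # 3.0% +/- 0.7%
```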

I am familiar with improvement scientists being so focused on improving processes and outcomes in various care pathways that they sometimes neglect critical questions about data validity, questions that will come back to haunt them once breakthrough results are reported. 

We QI practitioners often say that running the next Plan-Do-Study-Act (PDSA) improvement cycle is not a research experiment: collect just enough data to tell whether the changes being tested are leading to improvement. 
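
One widely used "just enough data" test is the run-chart shift rule, which treats roughly six or more consecutive points on the same side of the median as a non-random signal. A minimal sketch of that rule, using an invented monthly series and a baseline median as the reference:

```python
# Run-chart shift rule: six or more consecutive points on the same side of
# the median signal non-random change. This variant compares post-change
# months against a baseline median; the series itself is invented.

from statistics import median

def has_shift(values, ref_median, run_length=6):
    """True if run_length consecutive values fall strictly on one side of
    ref_median (values equal to the median break the run)."""
    run, side = 0, 0
    for v in values:
        s = (v > ref_median) - (v < ref_median)  # +1 above, -1 below, 0 on
        if s == 0:
            run, side = 0, 0
            continue
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

baseline = [12, 14, 11, 13, 15, 12, 14, 13]   # monthly deaths before change
after = [11, 10, 9, 10, 8, 9, 10]             # months after the tested change
print(has_shift(after, median(baseline)))     # True: seven points below median
```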

In my experience, this works for exceedingly small-scale projects. 

However, for large-scale improvement work, critical questions about sample size, extent of randomisation, indicator definitions and the like emerge strongly only when the project plans to publish its work in peer-reviewed journals. At that point, what might have started as an exciting adaptive process begins to be measured against rigorous traditional research. My suggestion: prepare for the day of reckoning from day one! 

Evaluation

This leaves us with the option of independent evaluation of one’s work. Within Project Fives Alive!, this has been done through periodic surveys and in-depth analyses conducted by the University of North Carolina at Chapel Hill and the Institute of Statistical, Social and Economic Research (ISSER) at the University of Ghana. Finally, if one is as lucky as PFA!, the beginning and end of a large-scale project may coincide with an independent national survey like the Ghana Demographic and Health Survey conducted by the government of Ghana. 

In April 2015, seven years after the country’s last Demographic and Health Survey (DHS 2008), which coincided with the start of PFA!, Ghana released its latest DHS results, coinciding with the end of the project. 

Given the project’s overall aim to assist and accelerate Ghana’s efforts to achieve MDG 4, we keenly awaited these results. 

The new DHS shows under-five mortality in Ghana reducing from 80 to 60 per 1,000 live births; child mortality (deaths between the first and fifth birthdays among children surviving to age 12 months) reducing from 31 to 19 per 1,000; infant mortality reducing from 50 to 41 per 1,000 live births, and neonatal mortality reducing from 33 to 29 per 1,000 live births. 

We could not help noticing that while the DHS recorded a 25 per cent drop in under-five mortality, from 80 to 60 per 1,000 live births, the project itself was reporting a 28 per cent drop in under-five deaths at the time the survey was conducted. 

We greatly rejoiced, knowing that PFA! had contributed modestly to these improvements. 

 

Sodzi_tettey@hotmail.com

www.sodzisodzi.com
