When the Government of Kenya closed schools and implemented social distancing guidelines on March 15th to prevent the spread of COVID-19, one of our flagship programs – the Deworm the World Initiative – had to rapidly adapt to the new conditions on the ground.
The Deworm the World Initiative provides technical assistance to the governments of Kenya, India, Nigeria, and Pakistan for the distribution of deworming medicine to children through schools. Many aspects of these deworming programs – including the delivery platform and timing of treatment – are being modified in response to COVID-19.
In Kenya, the school shutdown took place just after treatment finished in many counties, requiring quick changes to how we monitor the deworming program. As part of our technical assistance, we review the data that governments collect in their program reporting for each treatment round. A key indicator for deworming programs is treatment coverage: the proportion of children within the target population who receive the deworming medicine at schools. This indicator is our most immediate measure of success and helps us determine whether the program is reaching a sufficient proportion of the target population – ideally at least 75% coverage, the threshold recommended by the World Health Organization. When this coverage threshold isn’t reached, we work with the government to identify the barriers to treatment and adjust the program accordingly.
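The coverage check described above is simple arithmetic; the sketch below illustrates it with hypothetical counts (the figures are illustrative, not actual program data):

```python
# Minimal sketch of the treatment coverage indicator and the WHO threshold
# check described above. The counts are hypothetical, not program data.

WHO_THRESHOLD = 0.75  # WHO-recommended minimum treatment coverage

def treatment_coverage(children_treated: int, target_population: int) -> float:
    """Proportion of the target population that received the medicine."""
    return children_treated / target_population

coverage = treatment_coverage(children_treated=8_400, target_population=10_000)
print(f"Coverage: {coverage:.0%}")  # Coverage: 84%
if coverage >= WHO_THRESHOLD:
    print("Meets WHO threshold")
else:
    print("Below threshold - investigate barriers to treatment")
```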
The treatment coverage data is collected by teachers – who record the information as the children receive the drug – during each school-based deworming round. The data is then aggregated from the school-level up to the sub-national and national levels.
During this process, however, the data is not always collected accurately or reliably, due to human error or aggregation issues. To check for potential discrepancies, we perform an independent, secondary verification of coverage using coverage evaluation surveys, a method based on WHO guidelines. These surveys usually consist of in-person interviews with a sub-sample of children at their homes and schools to determine whether they received and ingested the medicine.
The surveys we conduct, either using our own staff or through external agencies, are performed during the weeks immediately following the deworming round. We then analyze and compare the results to the treatment coverage collected by the teachers, in order to evaluate the accuracy of the government’s coverage rate for further program improvements.
A new approach
This year, we piloted a new approach for our coverage evaluation surveys in Kenya due to the COVID-19 pandemic.
The National School-Based Deworming program in Kenya is implemented in two annual waves, about 3 months apart, which each cover about half of the targeted counties. The first of these waves was conducted at schools in 14 counties on March 11th, 2020, and we had planned for the coverage evaluation surveys to begin at the end of March. However, as the number of COVID-19 cases increased in the country, the Government of Kenya decided to close schools and implemented social distancing guidance on March 15th.
In an effort to protect our staff and the communities, we made the decision to suspend in-person interviews and change our methodology for coverage evaluation surveys. We began by reviewing the best practices of similar organizations and programs that use remote data collection, in order to maintain social distance.
Based on this research, we developed a new methodology, performed an early risk analysis of potential challenges, and decided to pilot a phone-based coverage evaluation survey. Kenya was a perfect place to try this new strategy: according to a report by the Pew Research Center, 97% of adults report owning or sharing a mobile phone, and most people are accustomed to using them in their daily lives.
The new methodology was not without challenges. Our usual coverage evaluation survey requires us to speak directly with children, which wasn’t feasible given limitations in their access to mobile phones – as well as the sensitive nature of reaching out directly to minors. Instead, we opted to speak with parents, who would then pose the survey questions to their children.
We began by adapting the questions from our in-person coverage evaluation survey. To encourage participation and decrease the burden on respondents, some questions were dropped to keep calls under 30 minutes. The key household demographic questions for parents were retained, and questions about children – such as whether deworming occurred, and whether children received and swallowed the deworming drug – were adapted so that parents could pose them to their children.
The sample for the survey consisted of 2,250 parent interviews, selected from communities in two randomly selected counties, Narok and Siaya. The contact information for parents was gathered through a method called non-random snowball sampling. We reached out to teachers and school-board members to gather contact information for parents of both enrolled and non-enrolled children, and collected further contact information through the parents that were surveyed. All respondents were given airtime credits as an incentive for completing the survey.
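The snowball sampling described above can be sketched as a simple referral expansion. The simulation below is illustrative, not the actual survey tooling: the seed-contact count and the assumption that each surveyed parent refers exactly two further contacts are hypothetical, though the target sample of 2,250 matches the pilot.

```python
from collections import deque

# Illustrative sketch of non-random snowball sampling: start from seed
# contacts gathered via teachers, then grow the sample through referrals
# collected from each surveyed parent. All names and referral counts here
# are hypothetical.

TARGET_SAMPLE = 2_250  # parent interviews, as in the pilot

def snowball_sample(seed_contacts, target, referrals_per_parent=2):
    """Survey contacts in FIFO order, asking each for further referrals."""
    sample = []
    queue = deque(seed_contacts)
    next_id = 0
    while queue and len(sample) < target:
        parent = queue.popleft()
        sample.append(parent)  # "interview" this parent
        # Each surveyed parent provides contact information for more parents.
        for _ in range(referrals_per_parent):
            queue.append(f"referred_parent_{next_id}")
            next_id += 1
    return sample

seeds = [f"teacher_referral_{i}" for i in range(50)]
sample = snowball_sample(seeds, TARGET_SAMPLE)
print(len(sample))  # 2250
```

The FIFO queue mirrors the field process: the teacher-provided seed contacts are surveyed first, and referrals are worked through in the order they are collected.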
Given the methodological changes that were necessary to conduct the survey over the phone and with parents, we recognized that there was a risk that certain biases could be introduced into the data.
The first is recall bias, where parents and children might not accurately remember the events of the deworming round at their school. Normally the surveys are conducted a few weeks after the deworming round; with the methodological changes, however, the phone survey was conducted two months after deworming. To address this, we trained our callers to encourage parents to ask their children for information while completing the phone survey, and to record ‘I don’t know’ if parents did not have the information or were unsure about their child’s experience. We found that the number of ‘I don’t know’ responses was small, which could indicate a limited impact of this bias.
The second is a potential selection bias in the sample of parents chosen for the survey. Because the sample was built by asking teachers for referrals of parents, it was likely that we contacted parents who have closer interaction with the school and teachers. This could mean that the average parent in this sample was more likely to give consent or encourage their children to be dewormed than the average parent in these communities. However, by encouraging callers to ask for parents of both enrolled and non-enrolled children during the snowball sampling, we found that a substantial proportion of parents with non-enrolled children – who are likely less involved with or knowledgeable about the school – were included in the sample.
The third is social desirability bias. Since the parents were the ones asking the questions and reporting the answers, they might have given more socially desirable responses to the surveyor – for instance, stating that their child took the deworming pill when the child actually hadn’t. In the past, our team has experienced challenges with this type of bias when conducting other phone surveys. Once we receive the government coverage data, we may be able to determine whether this bias occurred and its magnitude.
The phone survey took place throughout May. The data was then cleaned and analyzed to understand coverage estimates and the effectiveness of the survey. Normally, we would then compare the results of the survey with the data obtained during the deworming rounds. However, due to delays from COVID-19, the treatment numbers collected by teachers are not yet available.
While we wait for the government data, we sought to gather insights into the effectiveness of the pilot by comparing the results to historical treatment coverage data from these counties from 2013 to 2019. While these data sources are different, the historical data can provide context and a general point of comparison. As a baseline, since we adopted the WHO coverage evaluation survey guidelines in 2018, coverage validation and treatment coverage rates have differed by approximately 5 percentage points, on average.
We found that the remote coverage evaluation surveys are consistent with previous treatment coverage rates in Narok and Siaya counties. In Narok, the average coverage rate in the previous seven years was 82%, with a high of 89% and a low of 76%; the coverage validation rate from the phone survey was 83%, which fell within that range.
Similarly, the average treatment coverage rate in Siaya from 2013-2019 was 79%, with a high of 84% and a low of 76%; the coverage validation rate from the phone survey was 83%, which, while at the high end of this range, still falls within it. It was also consistent with a previous coverage validation survey we conducted in Siaya county after the 2018 deworming round, where we obtained a rate of 88%, compared to the treatment coverage rate of 82% that had been reported in the county.
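The range checks in the two paragraphs above can be reproduced directly from the reported figures. This is a minimal sketch of that comparison; it uses only the numbers in the text and introduces no new data:

```python
# Sketch of the historical comparison: does each county's phone-survey
# estimate fall inside its 2013-2019 treatment coverage range, and how far
# is it from the historical mean? All figures (%) are from the text.

historical = {
    "Narok": {"low": 76, "mean": 82, "high": 89, "phone_survey": 83},
    "Siaya": {"low": 76, "mean": 79, "high": 84, "phone_survey": 83},
}

BASELINE_GAP_PP = 5  # average validation-vs-treatment gap since 2018, in pp

def within_range(estimate: int, low: int, high: int) -> bool:
    """Does the survey estimate fall inside the historical coverage range?"""
    return low <= estimate <= high

for county, d in historical.items():
    in_range = within_range(d["phone_survey"], d["low"], d["high"])
    gap = abs(d["phone_survey"] - d["mean"])
    print(f"{county}: {d['phone_survey']}% is "
          f"{'within' if in_range else 'outside'} {d['low']}-{d['high']}%, "
          f"{gap} pp from the historical mean")
```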
While these comparisons with historical data are not conclusive, they do suggest reliability of these results, pending review of the 2020 treatment coverage data.
Replicating the methodology in other geographies, however, requires certain conditions to be in place for a comparable degree of success. First, we were able to conduct the pilot because of the wealth of high-quality data collected over eight years of program implementation, which made it significantly easier to contact teachers and parents and was necessary for the historical comparisons we used to analyze and understand the results. Second, our team has a strong relationship with the Kenyan Ministry of Education, which streamlined approval of this activity and allowed callers to leverage this partnership during phone interviews. Third, Kenya has achieved a significant degree of mobile phone penetration, which is crucial for conducting phone surveys, greatly eased achievement of target samples, and reduced non-response rates. Finally, the National School-Based Deworming program has been implemented in Kenya for eight years and is well known and trusted by communities, which likely increased parents’ participation in the survey.
Based on the results from the comparison with historical data and the review of potential biases, our team believes that the implementation of the pilot was largely successful. We are currently evaluating the expansion of the use of remote monitoring strategies to other geographies, due to the cost-effectiveness of the pilot and the continued need for these types of strategies due to the COVID-19 pandemic.