At Evidence Action, we do not typically measure final impacts when we implement a program at scale. By "impacts" we mean the metric of ultimate interest: the real reason we are doing what we are doing. We don't measure whether households with Dispensers have less diarrhea or lower child mortality. We don't measure whether children who get dewormed attend school more or have better cognitive scores. We measure whether people use chlorine and whether worm infection levels fall.
Measuring "means" rather than "ends" could be a controversial stance in an NGO community where M&E teams pride themselves on always measuring "impact."
We think we are doing the right thing. Here’s why.
For some time now, evidence-based development has been all the rage. Rigorous evidence about whether an intervention or program works, and for whom and why (and, by extension, whether aid money is effectively spent) is a growing focus of attention. We have seen tremendous growth in so-called impact evaluations of social development interventions and policies to understand whether they work, and significant interest in using rigorous evidence to make program and policy decisions. This is a welcome and important trend.
But it is easy for this conversation to miss an important element of evidence-based development: How do programs and policies that have been proven to work in rigorous research studies, in fact, reach millions if not billions of people? What is the path to scaling what works so it reaches the people who need it most?