At Evidence Action, we do not typically measure final impacts when we implement a program at scale. By "impacts" we mean the metric of ultimate interest: the real reason we are doing what we are doing. We don't measure whether households with Dispensers have less diarrhea or lower child mortality. We don't measure whether children that get dewormed attend school more or have better cognitive scores. We measure whether people use chlorine and whether worm infection levels fall.
Measuring "means" rather than "ends" could be a controversial stance in an NGO community where M&E teams pride themselves on always measuring "impact."
We think we are doing the right thing. Here’s why.
We use data every day. It's critically important to our work. As an evidence-based organization, we rely on high-quality, timely, and systematic measurement of inputs, outputs, and outcomes to make decisions about our work and evaluate our progress. We measure to make decisions; we choose our methods depending on the question we want to answer.
So what data do we collect and use, and why?
Evidence Action works on programs for which there is a solid evidence base of positive impact, often in the form of randomized controlled trials. We develop the business models to scale these evidence-based programs so they benefit millions of the poorest and most marginalized people. Evidence Action doesn't work exclusively in a particular sector, like water or microfinance, nor do we have a commitment to a particular kind of service delivery model. We are guided by evidence of impact first. So, what do we consider when assessing interventions to explore or support?