Evidence Action Beta is our in-house incubator of promising, evidence-based innovations. Beta exists to design, prototype, test and deliver measurably impactful, cost-effective programs that are ready for scaled implementation by Evidence Action and our partners.
No Lean Season, a new program in Evidence Action Beta's portfolio, aims to reduce the negative effects of seasonality on the poorest in rural agricultural areas by enabling labor mobility that increases incomes.
We give very poor rural laborers a travel subsidy of $19 so they can send a household member to a nearby city to find a job during the period between planting and harvesting. This is the time in rural areas when there are no jobs and no income, and when families miss meals.
With a temporary job during this 'lean season,' households are able to put an additional meal on the table for every member of the family every day. That’s 500 additional meals during the lean season.
We are investigating several critical questions to pressure-test the hypothesis that alternative strategies to food aid may be an effective and cost-effective means of providing seasonal income support in Bangladesh and elsewhere.
Winning Start is an Evidence Action Beta project designed to improve literacy and numeracy among primary school students. Youth volunteers target struggling learners with interactive educational activities that complement classroom learning and are tailored to students’ skill level.
Multiple randomised controlled trials conducted in India, Ghana, and Kenya have consistently shown that using lightly trained volunteers to deliver remedial sessions to students improves learning outcomes.
With Winning Start, volunteers also benefit. By facilitating remedial sessions and engaging in community projects, youth volunteers gain a platform to develop the professional skills that will support their transition into the workforce. Governments win too: Winning Start can be adapted to serve different aspects of a country's national agenda. It gives governments the opportunity to invest in youth while pursuing multiple other potential public gains.
In 2014, Evidence Action partnered with the Government of Kenya to pioneer the Winning Start model through G-United, a national youth volunteering program implemented by the Government of Kenya. Now in its third year of operation, G-United continues to grow, offering exciting learning opportunities as it does.
Our approach to translating rigorous evidence to impact at scale includes four key phases:
We identify rigorously vetted ideas with potential for impact, mechanisms that may be needed to make them work in the real world and at scale, enabling environments where particular ideas are most likely to succeed, and potential implementation partners to execute ideas. We have designed several tools that we use to do this, including:
Our sourcing activities typically yield more ideas and prospective opportunities than we can pursue at once. We therefore maintain a knowledge bank: a repository of all the ideas and prospects that emerge from our evidence and context scanning, and from our conversations with researchers. We expect the information in our knowledge bank to accrue interest over time—as more research is generated and as contexts change, for instance—and become more applicable in the future. Our knowledge bank keeps us ready to quickly capitalize on future opportunities.
Ultimately, the goal of our sourcing activities is not to find ‘perfect’ ideas over which no outstanding questions remain and no potential implementation challenges exist. Instead, information gained about specific ideas informs not just the decision about whether to progress the idea through the pipeline, but the design of potential prototypes through which we can test promising ideas.
Once we identify what works in principle, we consider how to make it work in practice. Through a deep dive into the literature supporting an idea, conversations with the authors of that literature, our own “design-thinking” research, and small-scale testing of prospective elements—such as messages, behavioural nudges, and hardware/tools—that may go into a program, we begin to conceptualize a prototype. We bring together the most promising elements and element-combinations to design a viable model for implementing evidence-based ideas. Prototypes are therefore not exact replicas of research studies. They are designed to remain faithful to the essential elements of the original study and theory of change, but take into consideration complementary evidence and practical implementation constraints.
During our prototyping phase, we also assess the scale potential of an idea, develop an initial cost-effectiveness model, and build a learning agenda that details the outstanding questions that will need to be investigated through testing.
During this phase, we apply the conceptualized implementation model from phase two as a working, testable prototype: a project that reaches several thousand users. We review the operational model and:
We also develop the necessary political relationships and alliances with local partners, such as governments and implementing partners.
Testing at scale: Once a functional and scalable implementation model is in place, we expand the project to reach tens of thousands of users and rigorously evaluate it for impact at scale. One of our goals is to answer questions that can only be understood by evaluating an intervention at scale, such as how the project might influence market dynamics, in order to gain a better grasp of the project's social impact and cost-effectiveness. During this final phase of program development, we also secure the necessary financing, ideally over several years, to grow quickly and across locations. In parallel, we recruit the right talent to support the rollout and growth of projects that prove impactful. Finally, we may adapt the project to new geographies and begin testing it in those new contexts.
As ideas evolve and move through these phases, we fully expect that some of them will fail. We learn from those that do. Meanwhile, innovations that demonstrate consistent impact and prove scalable and cost-effective are scaled up to improve the lives of millions of people.
Evidence Action maintains a pipeline of promising innovations, all of which are at different phases of our testing process.
At each phase of exploration, these interventions are examined along four key criteria to determine whether they should advance through the pipeline or be exited. We apply these four criteria throughout our program development process, giving greater or lesser emphasis to certain criteria depending on what is appropriate at each phase of innovation and testing. Our filtering criteria include:
Impact: We look for ideas that are backed by research. As we use these ideas to build testable prototypes, we also consider impact more comprehensively: does the initial impact registered in research hold in the real world? In a new context? Over time? And when implemented at scale, how do an idea's effects on economic markets (general equilibrium effects) or third parties (externalities) measure against its impact on beneficiaries?
Cost-effectiveness: We strive to find solutions that have the maximum impact for every dollar spent. Cost-effective programs offer high value for money and are better than alternative solutions at achieving a measurable outcome. We use cost-effectiveness analysis to provide insight into the relative costs and effects of different interventions, which ultimately informs our assessment of potential program impact. We also use cost-effectiveness analysis to provide general comparability across alternative interventions and to help inform priorities for resource allocation.
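At its core, a comparison like this reduces each intervention to a ratio of cost to measured effect, then ranks alternatives by cost per unit of outcome. A minimal sketch of that calculation, using entirely hypothetical interventions and figures (none drawn from Evidence Action's data):

```python
def cost_per_outcome(total_cost, total_effect):
    """Cost-effectiveness ratio: dollars per unit of measured outcome.

    Lower is better -- a cheaper way to achieve the same outcome.
    """
    return total_cost / total_effect

# Hypothetical alternatives measured against the same outcome
# (e.g. additional meals enabled during a lean season).
interventions = {
    "travel_subsidy": {"cost": 190_000, "effect": 5_000_000},
    "food_aid":       {"cost": 400_000, "effect": 5_000_000},
}

# Rank alternatives from most to least cost-effective.
ranked = sorted(
    interventions.items(),
    key=lambda kv: cost_per_outcome(kv[1]["cost"], kv[1]["effect"]),
)

for name, data in ranked:
    ratio = cost_per_outcome(data["cost"], data["effect"])
    print(f"{name}: ${ratio:.3f} per meal")
```

Real cost-effectiveness analyses are far richer than this ratio (they must grapple with discounting, uncertainty, and comparability of outcome measures), but the ranking logic they feed into is essentially the one shown here.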
Scalability: We only invest in delivering services that have the potential to reach millions of people. Interventions that are intrinsically impossible to scale, or that do not meet a real need faced by millions of people, do not make the cut. If, for any reason, we find we (or other implementing partners) cannot deliver a particular intervention at scale, we exit it from our pipeline.
Strategic fit: Evidence Action recognizes that some projects are better suited for our organization than others. We account for our strengths and limitations when deciding whether to pursue an idea, and we prioritize projects that align with our organizational strategy and infrastructural capacity, including those that have the potential to maximize the impact of our existing programs, or which leverage our existing program delivery platforms.