Evidence Action Beta is our in-house incubator of promising, evidence-based innovations. Beta exists to design, prototype, test and deliver measurably impactful, cost-effective programs that are ready for scaled implementation by Evidence Action and our partners.

What Is Beta Working On

NO LEAN SEASON: A ticket out of Seasonal Poverty

Update: Evidence Action is terminating No Lean Season. We share more about this decision in this blog post.

No Lean Season aims to reduce the negative effects of seasonality on the poorest in rural agricultural areas by enabling labor mobility that increases incomes. It is a new program that we are testing in Evidence Action Beta's portfolio.

We give a travel subsidy of $19 to very poor rural laborers so they can send someone to a nearby city to find a job during the period between planting and harvesting. This is the time in rural areas when there are no jobs, no income, and when families miss meals.

With a temporary job during this ‘lean season,’ households are able to put an additional meal on the table for every member of the family each and every day. That’s 500 additional meals during the lean season.
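As a rough illustration of the arithmetic behind that figure: a household receiving one extra meal per member per day over the lean season accumulates meals quickly. The household size and season length below are illustrative assumptions chosen to reproduce the 500-meal figure, not program data; the $19 subsidy is from the text above.

```python
# Back-of-the-envelope sketch of the "500 additional meals" figure.
# Household size and lean-season length are illustrative assumptions.
household_size = 5       # assumed members per household
lean_season_days = 100   # assumed length of the lean season, in days
subsidy_usd = 19         # travel subsidy mentioned above

# One extra meal per member per day over the lean season.
extra_meals = household_size * lean_season_days
cost_per_meal_usd = subsidy_usd / extra_meals

print(extra_meals)                  # 500
print(round(cost_per_meal_usd, 3))  # 0.038
```

Under these assumptions, the subsidy works out to roughly four cents per additional meal.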

We are investigating several critical questions to pressure-test the hypothesis that alternative strategies to food aid may be effective and cost-effective means of providing seasonal income support in Bangladesh and elsewhere.


Winning Start: A Level Learning Field For All Kids

Winning Start is an Evidence Action Beta project designed to improve literacy and numeracy among primary school students. Youth volunteers target struggling learners with interactive educational activities that complement classroom learning and are tailored to students’ skill level.

Multiple randomized controlled trials conducted in India, Ghana, and Kenya have consistently shown that using lightly trained volunteers to deliver remedial sessions to students improves learning outcomes.

With Winning Start, volunteers also benefit. By facilitating remedial sessions and engaging in community projects, youth volunteers gain a platform to develop the professional skills that will support their transition into the workforce. Governments win too: Winning Start can be adapted to serve different aspects of countries’ national agendas. It gives governments the opportunity to invest in youth while pursuing multiple other potential public gains.

In 2014, Evidence Action partnered with the Government of Kenya to pioneer the Winning Start model through G-United – a national youth volunteering program implemented by the Government of Kenya. Now in its third year of operation, G-United continues to grow, offering exciting learning opportunities as it does.




Our Approach

Our approach to translating rigorous evidence to impact at scale includes four key phases:


We identify rigorously vetted ideas with potential for impact, mechanisms that may be needed to make them work in the real world and at scale, enabling environments where particular ideas are most likely to succeed, and potential implementation partners to execute ideas. We have designed several tools for this purpose, including:

  • Evidence scans, through which we survey the existing literature on a particular theme. Evidence scans are triggered by strategic and partner interest.
  • Context scans, which are geared towards helping us understand the socio-political and economic dynamics of a prospective implementation area. Context scans are developed through desk research and interviews with development researchers and practitioners who are located in, or familiar with, the area in question.
  • Calls for results, issued as open invitations for researchers to submit promising studies.
  • Interviews with our network of experts.

Our sourcing activities typically yield a plethora of ideas and prospective opportunities. Since we cannot pursue all these leads at once, we maintain a knowledge bank: a repository of all the ideas and prospects that emerge from our evidence and context scanning, and our conversations with researchers. We expect the information in our knowledge bank to accrue interest over time—as more research is generated and as contexts change, for instance—and become more applicable in the future. Our knowledge bank keeps us ready to quickly capitalize on future opportunities.

Ultimately, the goal of our sourcing activities is not to find ‘perfect’ ideas over which no outstanding questions remain and no potential implementation challenges exist. Instead, information gained about specific ideas informs not just the decision about whether to progress the idea through the pipeline, but the design of potential prototypes through which we can test promising ideas. 


Once we identify what works in principle, we consider how to make it work in practice. Through a deep dive into the literature supporting an idea, conversations with the authors of this literature, our own “design-thinking” research, and small-scale testing of prospective elements—such as messages, behavioral nudges, and hardware/tools—that may go into a program, we begin to conceptualize a prototype. We bring together the most promising elements and element-combinations to design a viable model for implementing evidence-based ideas. Prototypes are therefore not exact replicas of research studies. They are designed to remain faithful to the essential elements of the original study and theory of change, but take into consideration complementary evidence and practical implementation constraints.

During our prototyping phase, we also assess the scale potential of an idea, develop an initial cost-effectiveness model, and build a learning agenda that details the outstanding questions that will need to be investigated through testing.


During this phase, we apply the conceptualized implementation model from phase two as a working, testable prototype: a project that reaches several thousand users. We review the operational model and:

  • Assess its scalability;
  • Build and roll out technology components that increase efficiencies and reduce costs;
  • Build a viable monitoring framework;
  • Iterate on different elements of the model, refining those that work and eliminating those that don’t; and
  • Prepare a standardized implementation toolkit that allows us or our partners to implement the project at scale.

We also develop the necessary political relationships and alliances with local partners, such as governments and implementing partners.


Testing at scale: Once a functional and scalable implementation model is in place, we expand the project to reach tens of thousands of users and rigorously evaluate it for impact at scale. One of our goals is to answer questions that can only be understood by evaluating an intervention at scale—such as how the project might influence market dynamics—in order to gain a better grasp of the project’s social impact and cost-effectiveness. During this final phase of program development, we also secure the necessary financing, ideally over several years, to grow quickly and across locations. In parallel, we recruit the right talent to support the rollout and growth of projects that prove impactful. Finally, we may adapt and apply the project to new geographies and begin testing it in those new contexts.

As ideas evolve and move through these phases, it is our full expectation that some of them will fail. We learn from those that do. Meanwhile, innovations that demonstrate consistent impact and prove scalable and cost-effective are scaled up to improve the lives of millions of people.

The Beta Pipeline

Evidence Action maintains a pipeline of promising innovations, all of which are at different phases of our testing process.

At each phase of exploration, these interventions are examined along four key criteria to determine whether they should advance through the pipeline or be exited. We apply these four criteria throughout our program development process, giving greater or lesser emphasis to certain criteria depending on what is appropriate at each phase of innovation and testing. Our filtering criteria include:

  • Impact: We look for ideas that are backed by research. As we use these ideas to build testable prototypes, we also consider impact more comprehensively: does the initial impact registered in research hold in the real world? In a new context? Over time? And when implemented at scale, how does an idea’s impact on economic markets (general equilibrium effects) or third parties (externalities) measure against its impact on beneficiaries?

  • Cost-effectiveness: We strive to find solutions that have the maximum impact for every dollar spent. Cost-effective programs offer high value for money and are better than alternative solutions at achieving a measurable outcome. We use cost-effectiveness analysis to provide insight into the relative costs and effects of different interventions, which ultimately informs potential program impact. We also use cost-effectiveness analysis to enable general comparability of alternative interventions and to help inform priorities for resource allocation.

  • Scalability: We only invest in delivering services that have the potential to reach millions of people. Interventions that are intrinsically impossible to scale, or that do not meet a real need faced by millions of people, do not make the cut. If, for any reason, we find we (or other implementing partners) cannot deliver a particular intervention at scale, we exit it from our pipeline.

  • Strategic fit: Evidence Action recognizes that some projects are better suited for our organization than others. We account for our strengths and limitations when deciding whether to pursue an idea, and we prioritize projects that align with our organizational strategy and infrastructural capacity, including those that have the potential to maximize the impact of our existing programs, or which leverage our existing program delivery platforms.
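The cost-effectiveness criterion above boils down to ranking alternatives by cost per unit of measured impact. The sketch below is a minimal illustration of that comparison; the intervention names and figures are hypothetical, not data from our programs.

```python
# Minimal cost-effectiveness comparison: rank interventions by cost per
# unit of measured impact. All names and figures are hypothetical.
interventions = {
    "intervention_a": {"total_cost_usd": 50_000, "outcomes_achieved": 20_000},
    "intervention_b": {"total_cost_usd": 80_000, "outcomes_achieved": 25_000},
}

def cost_per_outcome(record):
    """Dollars spent per unit of outcome achieved (lower is better)."""
    return record["total_cost_usd"] / record["outcomes_achieved"]

# Sort so the most cost-effective intervention comes first.
ranked = sorted(interventions, key=lambda name: cost_per_outcome(interventions[name]))
for name in ranked:
    print(name, cost_per_outcome(interventions[name]))
# intervention_a ($2.50 per outcome) ranks ahead of intervention_b ($3.20)
```

In practice, such models also account for effect sizes from the underlying research and context-specific delivery costs, but the core comparison is this ratio.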

Leveraging Our Existing Government Partnership in India to Tackle Anemia

For several years, we’ve partnered with the Government of India to deliver mass school-based deworming as part of our Deworm the World Initiative. The ongoing success of this partnership has allowed us to explore opportunities to extend our impact in India. Ultimately, we settled on one promising area for further exploration through our Beta incubator: India’s national Weekly Iron and Folic Acid Supplementation (WIFS) program, which is designed to address the pressing challenge of anemia among school-age children.

Living in the Service of Promise: Reflections from Winning Start Volunteers

Winning Start, an education program in our Beta incubator, is designed to improve child literacy and numeracy by using youth volunteers to deliver the rigorously tested and proven “teaching at the right level” (TaRL) pedagogy. As the world celebrates International Volunteer Day, we celebrate Winning Start volunteers – who spend up to a year working to unlock the promise of an upcoming generation. We interviewed five youth who successfully completed the Government of Kenya’s G-United program to learn more about their experiences and motivations.

Why (and When) We Test at Scale: No Lean Season and the Quest for Impact

No Lean Season, a late-stage program in the Beta incubation portfolio, provides small loans to poor, rural households for seasonal labor migration. Based on multiple rounds of rigorous research showing positive effects on migration and household consumption and income, the program was delivered and tested at scale for the first time in 2017. Results showed that the 2017 program did not have the desired impact on inducing migration, and consequently did not increase income and consumption. In this post, we dive deep into these results and explain how they are shaping the path forward for No Lean Season.

Recruiting youth volunteers in Africa to improve child literacy: Lessons from Winning Start

Last month, our team attended the inaugural Teaching at the Right Level conference in South Africa, hosted by pioneers in the field, Pratham and J-PAL. On a panel with organizations piloting variations of youth or volunteer-led TaRL models across Africa, our Program Coordinator, Fred Abungu, shared what we’ve learned from working with the Government of Kenya to effectively and sustainably recruit, retain, and motivate volunteers to deliver remedial support at steadily increasing scale. In this post, we explore some of the insights he offered.

An evaluation of a relative-risk HIV awareness campaign generated mixed results… here’s what we learned from it.

A 2005 randomized controlled trial conducted in Kenya found that girls who were told about the dangers of sugar daddies were 28% less likely to be pregnant at year-end than girls who were simply told to abstain or who received no sexual education beyond that offered in school. Based on this success, Young 1ove worked with a group of partners, including the Government of Botswana, the Abdul Latif Jameel Poverty Action Lab (J-PAL), the Botswana-Baylor Children’s Clinical Centre of Excellence, and Evidence Action, to evaluate the idea again through a similar program, No Sugar. This second round of evaluation delivered mixed results, and all partners involved in the program made a decision not to scale the No Sugar intervention. Here are our three biggest takeaways from the experience.

Ambiguous results and clear decision-making: a sugar-daddy awareness program evaluated in Botswana will not be scaled up

What happens when you tell middle-school and teenage girls in Africa about the dangers of sexually engaging with older men who offer them financial favors? Does it affect their choice of sexual partner? A Kenya-based, 2005 randomized controlled trial suggested it might. In 2014, Botswana-based non-profit Young 1ove brought together a group of partners to re-evaluate the idea through a program, “No Sugar,” designed to be scaled-up across Southern Africa. The evaluation yielded mixed results; consequently, the Government of Botswana, Young 1ove, and other partners are not scaling up No Sugar as it was originally designed. Instead, Young 1ove is redesigning the program for further evaluation of impact, before potentially scaling it up in future.

How we’re learning from the pioneers of “Teaching at the Right Level”

In late 2017, we had a fantastic opportunity to participate in a ‘Teaching at the Right Level’ (TaRL) workshop in India hosted by Indian education non-profit and pioneer of the TaRL model, Pratham. The crux of the workshop was our favorite theme here at Evidence Action: how to translate rigorous research into effective, scaled action.