
Impact Evaluation: There is a Different Way to Do It

Lucy, four, playing with Joyce, 13, in a remote community in Malaita Province, the Solomon Islands
Photo credit: Conor Ashleigh / Save the Children
Date:
Time: 8:30am - 9:30am ET
Location: Online
Organizer: HAEC

Join the Humanitarian Assistance Evidence Cycle (HAEC) for this one-hour interactive webinar launching our three-part Impact Insights webinar series. During this foundational webinar, the HAEC team will present what questions impact evaluations can answer relative to other evaluation types, and why causal evidence matters in the humanitarian sector.

Building on this event, in January 2024 HAEC will host an engaging webinar outlining key findings from our report, Navigating Constraints to Implementing Impact Evaluations in Humanitarian Settings, and detailing successful strategies to overcome these challenges.

Register for the second webinar in the Impact Insights Series here

Finally, in February 2024, implementers from several HAEC-funded impact evaluations will provide real-world examples of navigating these constraints as they share their experiences, challenges, and lessons learned from conducting impact evaluations in humanitarian contexts.

Watch the Recording  Download the Slides

 

Frequently Asked Questions from HAEC Events

Isn’t it unethical to conduct impact evaluations in humanitarian settings? 

We have spent the past several decades building institutions to protect participants in human subjects research, and all impact evaluations are subject to ethical review to ensure those protections. We know how to manage this concern.

Regarding withholding programming from a control group, there are several tried-and-tested ways to address this challenge through research design choices:

  • Randomized evaluations may be justified in settings where demand for services outweighs available resources. In these cases, randomly allocating who receives assistance may be a fair and ethical approach to distributing aid.
  • In situations where assistance has been targeted, there can be opportunities to leverage quasi-experimental designs that construct a counterfactual group by identifying similar beneficiaries who were marginally ineligible based on established needs assessments.
  • A/B testing approaches do not require pure control groups; instead, each group is assigned a different variant of the program. This allows the comparison of different program modalities to assess which is most effective or cost-effective (a minimal assignment sketch follows this list).
    • To learn more about the role of A/B testing, check out the second video in our series debunking common impact evaluation myths here
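
To make the A/B framing concrete, here is a minimal sketch of random arm assignment, assuming a simple individual-level design with two hypothetical program variants (the arm names and household IDs are ours, purely illustrative, not HAEC tooling):

```python
# Minimal sketch of A/B arm assignment (illustrative only, not HAEC's actual tooling).
# Every participant receives some variant of the program; there is no pure control.
import random

def assign_arms(participant_ids, arms=("variant_a", "variant_b"), seed=2024):
    """Randomly assign each participant to one program variant, keeping arms balanced."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible and auditable
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal shuffled participants round-robin into arms so group sizes stay equal.
    return {pid: arms[i % len(arms)] for i, pid in enumerate(ids)}

# Example: four hypothetical household IDs split across two cash-disbursement variants.
print(assign_arms(["hh-001", "hh-002", "hh-003", "hh-004"], arms=("lump_sum", "monthly")))
```
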
What are we doing that needs validation?  

The humanitarian assistance sector is a space where evidence is sparse compared to other sectors. To put this in perspective, HAEC identified only 163 studies on improving food security in humanitarian settings. In agriculture, we found roughly ten times as many (about 1,600), and in education about 1,100.

In particular, we are advocating for impact evaluations to answer specific operational research questions that implementers may have. For example, “If I add training alongside the provision of agricultural inputs, does this make my program more impactful?” or “If I disburse cash in a lump sum versus multiple times per month, does this make my program more cost-effective?”

With humanitarian needs expanding and funding struggling to keep pace, we think it is more important than ever to use evidence to inform how to optimize humanitarian programming. Impact evaluations allow us to understand causality to ensure we are using limited funds as efficiently and effectively as possible, to reach the most people and have the greatest impact. 

We want to reiterate that we are not suggesting that you need an impact evaluation for every program. In designing an evaluation, it is important to identify what question you want to answer and allow that to drive your decision around evaluation design. We also encourage you to look at our Evidence Gap Map to see where evidence already exists. If the evidence doesn't already exist, if you are interested in testing a new intervention or package of interventions, or if you're interested in a well-supported intervention in a new context, then an impact evaluation might be the right option for you.

Implementers are so bandwidth-constrained. How do we take this into account?  

This is such an important consideration and one we hear time and time again. We all know how much work implementers have. We believe that a successful impact evaluation in the humanitarian space should minimize requests to the implementation team through M&E staff who play a research coordinating role. While this can be managed through a centralized research unit, that is not the only model to overcome this constraint. We encourage you to read our Navigating Constraints report to understand more about our approach to this constraint (page 10). However, to outline a few of HAEC's approaches:

  • HAEC offers in-person capacity-strengthening trainings that help implementers better coordinate and consume evidence from impact evaluations. HAEC is working to develop this curriculum into a free, online training.
  • HAEC has created a series of template evaluation survey tools and resources that implementers can download and adapt to save time in their evaluation process. You can find all of HAEC's resources here.

What about impact evaluations in rapid-onset crises?

HAEC understands that the short timelines of humanitarian awards, including rapid start-ups, present key challenges for conducting an impact evaluation. To reduce research preparation time, HAEC designed and implemented an expedited Institutional Review Board (IRB) process for the studies we fund, piloting a process that is effective and easily navigated. Furthermore, HAEC disseminates tools and templates to minimize research preparation time, such as consent forms, survey tools, and code for conducting sample size calculations.
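
As a hedged illustration of what code for sample size calculations can look like (not HAEC's actual templates; the effect size, alpha, and power values below are our assumptions), here is a minimal two-arm sketch using the standard normal-approximation formula:

```python
# Illustrative sample size calculation for a two-arm evaluation (not HAEC's actual code).
# Standard normal-approximation formula for comparing two means with standardized
# effect size d:  n per arm = 2 * (z_(1-alpha/2) + z_power)^2 / d^2
import math
from scipy.stats import norm

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per arm for a two-sided test of two means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_power = norm.ppf(power)          # z-score corresponding to the desired power
    n = 2 * (z_alpha + z_power) ** 2 / effect_size ** 2
    return math.ceil(n)

# Detecting a modest standardized effect of 0.2 at 80% power needs ~393 per arm.
print(sample_size_per_arm(0.2))
```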

Moreover, HAEC encourages funders to create funding mechanisms that allow for the pre-positioning of partnerships and research designs. These partnerships and research designs can be established ex ante and set up for immediate deployment once a crisis occurs. HAEC recently published a blog highlighting what this looks like in anticipation of hurricane season in the US, which you can check out here.

One example is a recent impact evaluation conducted by the World Food Programme on anticipatory action in Nepal.

What is the appropriate timeline for conducting an impact evaluation?  

There is a lot of flexibility here as well! There's a misconception that impact evaluations require multiple years; we're demonstrating through HAEC that this does not have to be the case. All of our funded studies were launched and will be completed in about a year. The key driver of an impact evaluation timeline is the outcomes you want to measure and how long they take to manifest. If you are looking at intermediate outcomes, or even outputs (which may be of interest in A/B testing approaches), you can significantly reduce the timeline.

In terms of when to start an impact evaluation, the earlier the better! It's easier to design an impact evaluation ahead of program implementation so that it can be better embedded within the program design and overcome operational constraints. There may also be cases where baseline data is required (e.g., if you are using a quasi-experimental design and no administrative or targeting data is available).

How can we ensure that the work of local organizations is highlighted?

Evidence use. Everyone thinks about local organizations in terms of evidence generation, and this is important. But local organizations have even more to say, and more to contribute than their international colleagues, about how best to use and apply the learnings from impact evaluations.

How do we make sure we are using the evidence from impact evaluations to inform our work? 

One of the first steps to ensuring we use evidence is making sure we share it broadly. To that end, HAEC’s Evidence Gap Map is a useful resource to outline the existing evidence base for food security programs. Further, HAEC has an Evaluation in Action series, where we work with implementers to publish a brief about their ongoing or recently completed impact evaluation. If you have an evaluation you’d like to feature in this series, you can submit the information here.

Additionally, the more targeted the research questions are to organizational learning agendas or specific operational decisions, the more likely the evidence will be used. This is why it's always important to start with your question when deciding if an impact evaluation is the right tool to use.

How do you identify and control for the many variables that could interfere with conducting an impact evaluation in a humanitarian setting?

Humanitarian contexts present unique research implementation challenges. We have a section about this in our Navigating Constraints report (page 12), but to outline a few of the solutions HAEC has identified: there are many innovations in data collection technology, such as phone, interactive voice response (IVR), or SMS surveys that researchers can conduct remotely. Teams must also plan for high attrition in these cases, using the same mitigation strategies we see in the development space, such as planning for larger sample sizes or devoting resources to participant tracking.
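
To illustrate the "larger sample sizes" point, a simple and commonly used adjustment is to inflate enrollment by the expected attrition rate; the figures below are our assumptions, not numbers from the report:

```python
# Illustrative attrition adjustment (our example, not from the Navigating Constraints report).
import math

def inflate_for_attrition(n_required, attrition_rate):
    """Enroll enough participants so roughly n_required remain after expected attrition."""
    return math.ceil(n_required / (1 - attrition_rate))

# If the analysis needs 393 participants per arm and we expect 20% attrition,
# we should enroll about 492 per arm at baseline.
print(inflate_for_attrition(393, 0.20))
```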

There have been some critiques of experimental design. How is HAEC addressing these critiques? 

The major critiques of RCTs concern external validity, along with the possibility that threats to internal validity may still arise, and we agree these are real concerns! We'd argue, though, that these critiques are not unique to RCTs; they are challenges for many research methods. On internal validity, when it comes to establishing attribution, RCTs are the best option because internal validity issues are generally less common than in quasi-experimental designs, which require more assumptions.

As for external validity challenges, we advocate using impact evaluations to answer more targeted operational research questions, as we believe these are the most valuable for implementers. This is why it's important to tie impact evaluations to specific questions and operational decisions (e.g., should I add this component to my program?).

Can you provide examples of impact evaluations generating evidence around equality for different groups (especially vulnerable groups)? 

Yes! Our Evidence Gap Map highlights several studies we identified that examine the effectiveness of humanitarian food security interventions targeted to certain vulnerable groups. We identified a small number of studies looking at interventions targeting women, children, people with disabilities, refugees, and internally displaced people.