COVID-19 Evaluation Study Design
The COVID-19 Evaluation is a mixed-methods evaluation with a multi-level data collection approach. At the field level, the evaluation team will collect qualitative data in one case study country for each BHA region: Office of Africa (OA); Asia, Latin America and the Caribbean (ALAC); and Middle East, North Africa and Europe (MENAE). At the award level, the evaluation will conduct an in-depth assessment of the data, documents, and partner perspectives for a sample of the total awards. At the global level, the evaluation will draw on award indicator data and a semi-structured e-survey of the implementing partners (IPs) to assess funding results. These methods will evaluate how new health, WASH, protection, and food security and livelihoods programming enabled the continuation of services and addressed critical needs or secondary impacts exacerbated by the pandemic.
The four main data sources to answer the evaluation questions are:
- Case Studies: Three in-depth country case studies representing each BHA region.
- Award Deep Dive: Analysis of a 20% sample of awards, including document review and key informant interviews conducted with IPs and BHA AORs.
- Award Data Analysis: Analysis of quantitative secondary data from BHA’s award monitoring system.
- IP E-Survey: Survey disseminated to IPs to provide perception-based primary, structured and unstructured data.
The 11 evaluation questions, organized under four evaluation criteria, are listed below.
1. Effectiveness of Implementing Partner Performance
1.1 Funding objectives—To what extent did BHA-funded COVID-19 supplemental awards contribute to BHA Objectives, particularly as they relate to the continuity of ongoing humanitarian services? Why or why not?
1.2 Results—To what extent did IPs achieve the expected results articulated in their COVID-19 supplemental awards?
1.3 Approaches—What were the successes and challenges of the interventions implemented, both those in line with BHA Technical Guidance for the COVID-19 funding and other approaches?
2. Relevance of Assistance to Participants
2.1 Accountability to Affected Populations (AAP)—How and to what extent were members of affected communities, especially women, youth, older persons, and persons with disabilities, consulted and able to provide regular feedback, and how did this feedback influence programmatic decisions on the design, implementation, and monitoring of activities?
2.2 Responsive design—How were design and targeting decisions (including resource allocation decisions by BHA and targeting of interventions by IPs) made, and what was their effect on relevance?
2.3 Relevance over time—To what extent were BHA COVID-19 funding guidance and awards adaptive to evolving knowledge of the disease and global recommendations, as well as the priority needs of the affected populations over time?
3. Efficiency (Timeliness) of the Delivery of Assistance
3.1 Factors affecting timeliness—What characteristics and circumstances were associated with faster and more timely assistance, or with slower or delayed assistance? (Consider factors and barriers internal and external to BHA/IPs, IP capacities, efficiency issues, global availability of supplies, policy constraints, etc.)
3.2 Duration and its effect—How did the six-month period influence activity design or implementation processes?
4. Coordination Capacity (Coherence) of Implementing Partners
4.1 Layering with other actors—To what extent did partners effectively coordinate their COVID-19 response activities with other actors operating in the same environment/context (including local actors) for a coherent and layered approach to the pandemic? Why or why not?
4.2 Leveraging with other awards—To what extent were partners able to effectively coordinate and leverage activities between their COVID-19-specific awards and their non-COVID-19 (IDA, Title II or other) awards operating in the same locations within the same timeframes?
4.3 Coordination support—To what extent did the HCIM-supported coordination efforts or activities facilitate useful and meaningful coordination that reduced duplication or supported complementary activity planning?