This report covers a study commissioned by DFID entitled ‘Broadening the Range of Designs and Methods for Impact Evaluations’.
Impact Evaluation (IE) aims to demonstrate that development programmes lead to development results — that the intervention, as cause, has an effect. Accountability for expenditure and development results is central to IE; at the same time, policy makers often wish to replicate, generalise and scale up, and so also need to accumulate lessons for the future. Explanatory analysis, by answering the ‘hows’ and ‘whys’ of programme effectiveness, is central to policy learning.
The study has concluded that most development interventions are ‘contributory causes’. They ‘work’ as part of a causal package in combination with other ‘helping factors’ such as stakeholder behaviour, related programmes and policies, institutional capacities, cultural factors or socio-economic trends. Designs and methods for IE need to be able to unpick these causal packages.
It is important to recognise that even when IE is inappropriate, enhancing results and impacts can still be addressed through evaluation strategies other than IE. Results monitoring can make major contributions to accountability-driven evaluations. There will also be occasions when real-time, operational, action-research-oriented and formative evaluations can all make serious contributions to filling gaps in evidence and understanding.