A central concern of impact evaluation is whether a program has actually made a difference. To answer this question, an evaluation seeks to establish causality: the causal link between the social program and its outcomes. Attributing causality to social programs is difficult, however, because of their inherent complexity and the many and varied factors at play.
Evaluators can choose from a range of theories and methods to address causality. Measuring the counterfactual, the difference between actual outcomes and what would have occurred without the intervention, has been at the heart of traditional impact evaluation. Although the counterfactual has typically been measured using experimental and quasi-experimental methods, it does not always require a comparison group and can also be constructed qualitatively.
With this in mind, this article explores the usefulness of additionality, a mixed-methods framework developed by Buisseret et al., as a means of making evaluative comparisons against the counterfactual.