The question of whether interventions that are supposed to prevent and counter violent extremism (P/CVE) are actually effective is as pressing as it is often difficult to answer. Evaluating P/CVE interventions is a challenging task, compounded by complex causal chains, dynamic contexts and what is often called the ‘attribution problem’: How, among the many factors that influence an individual’s turn towards or away from violent extremist ideas and actions, should one isolate the effect of a single P/CVE intervention? (Koehler 2017, 95; Khalil and Zeuthen 2016, 31–32; Vadher 2025, 14).
Outcome harvesting is an evaluation method that is already gaining traction in other complex fields, such as peacebuilding and development cooperation (Beardmore et al. 2023; Tomei et al. 2024; Carbone et al. 2025). In recent years, it has been discussed as a valuable tool within the P/CVE community, too. The USAID CVE Reference Guide implicitly suggests the method’s utility for the P/CVE field by including a detailed briefing on the method from the Ford Foundation’s MENA office (Wilson-Grau and Britt 2013). Moreover, the toolkit for P/CVE evaluations by Holdaway and Simpson (2018, 99) explicitly recommends outcome harvesting as an evaluation tool, and in an international survey on evaluation in P/CVE, one respondent from Côte d’Ivoire pointed to outcome harvesting as a particularly innovative evaluation method (Bressan et al. 2024, 33). This blog post picks up on these pointers by briefly describing the method and discussing its application to the P/CVE field.
Why evaluating P/CVE is so challenging
Evaluation works best when the evaluated intervention has a clear linear causal logic: a certain input leads to an activity, which produces a predictable outcome. Such a sequence, combined with only a few factors other than the intervention that could produce the outcome, lends itself easily to an assessment of whether the intervention was ‘effective’. The evaluative method of choice in this ideal scenario may be a randomised controlled trial, where outcomes in a group receiving the intervention are compared against those in a control group that does not. However, the realities of P/CVE are quite different. Since radicalisation is a highly individualised process, interventions need to be tailored, and their effects often combine with other key life events, producing complex causality. Moreover, the effects may manifest far beyond the immediate project scope. All this, in combination with the ethical challenges involved in withholding a P/CVE intervention from a control group, rules out the use of randomised controlled trials, the ‘gold standard’ among evaluation methods. It is beyond the scope of this post to describe these challenges in detail and address each of them with potential solutions (see also the Vortex blog post ‘On the challenges of evaluating efforts to prevent a causally complex phenomenon such as violent extremism’). Instead, this post concentrates on one method as a potential way forward: outcome harvesting. After all, what Wilson-Grau and Britt (2013, 2–3) describe as the setting that makes this method attractive sounds very much like the challenges faced by evaluations of P/CVE interventions: ‘In complex environments (…) objectives and the paths to achieve them are largely unpredictable and predefined objectives and theories of change must be modified over time to respond to changes in the context’.
What is outcome harvesting?
Outcome harvesting is a participatory evaluation approach in which, instead of assessing whether activities lead to specified outcomes against pre-established indicators, an inverted logic is at play: the method starts by asking ‘What has changed?’ and only then poses the question ‘How did the programme contribute to this change?’. Wilson-Grau and Britt (2013, 1), who were involved in developing the method, define it as follows:
Unlike some evaluation methods, Outcome Harvesting does not measure progress towards predetermined outcomes or objectives, but rather collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change.
Hence, it requires agreement on what the relevant outcome of interest of the specific P/CVE intervention is, followed by a collection of evidence on how the intervention contributed to the identified outcome. According to Wilson-Grau and Britt, such evidence can be reported observations, direct critical observation, or direct or simple induced inference (ibid., 7). After this evidence has been collected, outcome harvesting involves a winnowing process in which information is ‘validated or substantiated by comparing it to information collected from knowledgeable, independent sources’ (ibid., 1) that are ‘knowledgeable about the outcome(s) and how they were achieved’ (ibid., 5). Finally, this information can be interpreted and a nuanced and reliable statement about effectiveness can be made – one that focuses on contribution rather than attribution.
The value of outcome harvesting for evaluating P/CVE interventions
Outcome harvesting does not solve every challenge of evaluating P/CVE, yet it offers significant advantages over traditional evaluation methods in complex P/CVE contexts. The challenge that P/CVE interventions aim at prevention, and thus involve non-events as outcomes, is met by shifting the focus to observable proxy outcomes, such as positive behavioural changes. While this is not new and is recommended for P/CVE evaluations in general (Holdaway and Simpson 2018, 70; Helmus et al. 2017, 59), outcome harvesting in particular offers a way to deal with the non-linear causality underlying P/CVE interventions. By working backwards from outcomes to activities, it forces the evaluator to map the causal pathway post hoc rather than assuming a linear path defined in advance. This also allows evaluators to identify emergent and unintended outcomes, which is particularly important in delicate P/CVE interventions.
Since outcome harvesting by design allows for highly individualised outcomes, there is no need for agreement on a common set of indicators for success, ‘making it a good tool to address the cloudiness typically associated with P/CVE evaluative exercises’ (Ris and Ernstorfer 2017, 21–22). While this acknowledges that ‘what indicates success for one might (…) not be the same for another’ (Raets 2022), it also makes comparison more difficult and requires more resources. In general, it is a notable limitation of outcome harvesting that it requires extensive and time-consuming verification – something that is potentially unattractive in a field facing frequent resource constraints. Lastly, the ‘attribution problem’ – the difficulty of proving that the P/CVE intervention caused a certain outcome – is addressed by focusing on credible contribution instead. Outcome harvesting systematically asks for and verifies the plausibility of the intervention’s contribution, moving from absolute attribution to verifiable contribution. This verification, however, requires multiple credible sources, both in the evidence collection and in the winnowing process, that corroborate the outcome and validate the intervention’s contribution. This can be a particular weakness of the method, since there might not be enough stakeholders to include without risking selection bias.
Finally, combining outcome harvesting with a theory of change can be fruitful, since such a theory tells the harvesters what kinds of changes are relevant to look for. After outcome harvesting has identified what did happen, these outcomes can be compared to the assumptions stated in the initial theory of change, highlighting successes, surprises and flawed assumptions. Conversely, the reversed logic of outcome harvesting can also help in building a theory of change: ‘By asking what needs to happen for these effects to occur and what resources are necessary to achieve them, the focus shifts to the effects themselves, while at the same time achieving flexibility in terms of measures and resources’ (Klemm and Strobl 2024, 8–9, citing Strobl and Lobermeier 2021, 72).
In conclusion, while outcome harvesting is not a silver bullet for evaluating P/CVE, it aligns better with the realities of this field than many traditional evaluation approaches. By reversing the usual evaluation logic, it helps evaluators identify credible contributions to outcomes within complex causal chains. Its participatory nature and focus on observable change make it particularly suited to contexts where success is hard to pin down. However, it demands time and knowledgeable stakeholders who provide evidence and corroborate causal connections, which might not be feasible in every intervention context. Working alongside a theory of change can focus the evaluation exercise, but outcome harvesting can also be used to produce a theory of change or correct an existing one. Hence, for evaluators seeking to capture the subtle effects of P/CVE interventions, outcome harvesting may not answer every question, but it certainly stands out as an innovative tool.
Sources
Beardmore, Amy, Matthew Jones, and Joanne Seal. 2023. ‘Outcome Harvesting as a Methodology for the Retrospective Evaluation of Small-Scale Community Development Interventions’. Evaluation and Program Planning 97 (April): 102235. https://doi.org/10.1016/j.evalprogplan.2023.102235.
Bressan, Sarah, Sophie Ebbecke, and Lotta Rahlf. 2024. How Do We Know What Works in Preventing Violent Extremism? Evidence and Trends in Evaluation from 14 Countries. With Angela Herz and Anna Heckhausen. GPPi; PrEval (PRIF). https://gppi.net/assets/BressanEbbeckeRahlf_How-Do-We-Know-What-Works-in-Preventing-Violent-Extremism_2024_final.pdf.
Carbone, Nicole B., Nathalie Alberto, Kate Henderson, et al. 2025. ‘Use of Outcome Harvesting to Understand the Outcomes of a COVID-19 Pandemic Leadership and Management Program in Six Countries’. Evaluation and Program Planning 111 (August): 102619. https://doi.org/10.1016/j.evalprogplan.2025.102619.
Helmus, Todd C., Miriam Matthews, Rajeev Ramchand, et al. 2017. RAND Program Evaluation Toolkit for Countering Violent Extremism. RAND Corporation. https://www.cvereferenceguide.org/sites/default/files/resources/RAND_CVE%20EVAL%20toolkit.pdf.
Holdaway, Lucy, and Ruth Simpson. 2018. Improving the Impact of Preventing Violent Extremism Programming: A Toolkit for Design, Monitoring and Evaluation. United Nations Development Programme. https://reliefweb.int/report/world/improving-impact-preventing-violent-extremism-programming.
Khalil, James, and Martine Zeuthen. 2016. Countering Violent Extremism and Risk Reduction: A Guide to Programme Design and Evaluation. Whitehall Report 2-16. Royal United Services Institute. https://static.rusi.org/20160608_cve_and_rr.combined.online4.pdf.
Klemm, Jana, and Rainer Strobl. 2024. Wirkungsmodelle und ihr Potenzial für Evaluation und Qualitätssicherung in der Demokratieförderung. PrEval Expertise. PrEval Consortium. https://preval.hsfk.de/fileadmin/PrEval/PrEval_Expertise_01_2024.pdf.
Koehler, Daniel. 2017. ‘Preventing Violent Radicalisation: Programme Design and Evaluation’. In Resilient Cities. Countering Violent Extremism at Local Level, edited by Diego Muro. Barcelona Centre for International Affairs. https://www.cidob.org/en/articulos/monografias/resilient_cities/preventing_violent_radicalisation_programme_design_and_evaluation.
Raets, Sigrid. 2022. ‘Trial and Terror. Countering Violent Extremism and Promoting Disengagement in Belgium’. Journal for Deradicalization Spring 2022 (30): 223–61.
Ris, Lillie, and Anita Ernstorfer. 2017. Borrowing a Wheel: Applying Existing Design, Monitoring and Evaluation Strategies to Emerging Programming Approaches to Prevent and Counter Violent Extremism. Peacebuilding Evaluation Consortium. https://www.cvereferenceguide.org/sites/default/files/resources/Applying-Existing-DME-Strategies-to-Emerging-PCVE-Approaches.pdf.
Strobl, Rainer, and Olaf Lobermeier. 2021. ‘Wirkungen im Zentrum’. In Evaluation von Programmen und Projekten der Demokratieförderung, Vielfaltgestaltung und Extremismusprävention, edited by Björn Milbradt, Frank Greuel, Stefanie Reiter, and Eva Zimmermann. Beltz Juventa.
Tomei, Gabriele, Linda Terenzi, and Enrico Testi. 2024. ‘Using Outcome Harvesting to Evaluate Socio-Economic Development and Social Innovation Generated by Social Enterprises in Complex Areas. The Case of BADAEL Project in Lebanon’. Evaluation and Program Planning 106 (October): 102475. https://doi.org/10.1016/j.evalprogplan.2024.102475.
Vadher, Kiren. 2025. Evaluating in Complex Policy Environments: A Practitioner’s Perspective. Crest Security Review. Crest (Centre for Research and Evidence on Security Threats).
Wilson-Grau, Ricardo, and Heather Britt. 2013. Outcome Harvesting. Ford Foundation MENA Office. https://www.cvereferenceguide.org/sites/default/files/resources/wilsongrau_en_Outome%20Harvesting%20Brief_revised%20Nov%202013.pdf.

