Prevention is a concept aimed at thwarting events that would otherwise have occurred. It is a proactive measure, rooted in assumptions about the likelihood of an undesirable event that range from mere speculation to substantiated claims. Such an event can be, for example, an act of violent extremism, as the frequently used abbreviation ‘PVE’ for ‘preventing violent extremism’ shows. However, when such notions translate into practical interventions, they can run into legitimacy problems, particularly because of the potential for adverse outcomes, including feelings of stigmatisation.
Perhaps this is why some scholars prefer the term ‘countering violent extremism’, abbreviated as ‘CVE’, as it inherently acknowledges the presence of a threat that requires proactive measures to mitigate. However, even within this framework, it is challenging to completely divorce oneself from the preventive notion. For instance, one definition of CVE describes it as ‘a broad spectrum of non-coercive and preventative initiatives aimed at addressing the root causes of violent extremism in specific locations where programmes are implemented’ (Khalil and Zeuthen 2016, 4).
‘P/CVE’, preventing and countering violent extremism, finally, is a term frequently used to deliberately combine the preventive and the combative aspect. Some definitions of P/CVE even distinguish between prevention and intervention, understanding it as a spectrum of non-coercive upstream preventative or interventional actions applied at the individual, relational, group, or societal level, challenging various push and pull factors of recruitment and mobilisation for violent extremism (Pistone et al. 2019, 3; Zeiger and Aly 2015, 1).
These disagreements point to conceptual uncertainties about what preventive measures in the field of extremism prevention entail and where the boundaries of the concept lie. A variety of measures can fall under any of these three banners – or even under several at once. Gielen (2019, 5), for example, writes regarding CVE that it represents a ‘catchphrase for a policy spectrum varying from early prevention and safeguarding measures for society, groups, and communities to very targeted measures for violent extremists such as de-radicalisation and disengagement programmes’. To prevent measures from carrying the label of PVE, CVE, or P/CVE while in fact stigmatising people or encroaching excessively on individual liberties, it is imperative to assess whether they actually achieve their intended objectives. Through scientific scrutiny, evaluation determines whether a measure is efficacious, efficient, and relevant, and thereby enables a critical appraisal of whether it is justified or needs to be modified or abolished due to inefficiency or unjustified adverse effects. However, evaluation is fraught with challenges, primarily due to the highly complex and context-dependent nature of the phenomena being addressed.
Challenges in evaluating measures to prevent and/or counter violent extremism
Every journey into and out of violent extremism is a very individual one. The terms ‘key drivers of violent extremism’ or ‘push and pull factors of recruitment and mobilisation’, as used in the definitions above, represent merely aggregated factors. What holds true for one individual may not resonate with another. In addition, there is usually not just one mechanism at play, but a complex and individual interplay of mechanisms. While one intervention may prove effective for person A only when paired with another, person B might require an entirely different approach to achieve similar outcomes. Crucially, the determinants of these journeys may lie beyond the reach of P/CVE efforts and may even be rooted in contextual factors alone.
The inherent complexity of causality raises the question: is it feasible to define what a ‘successful’ intervention is? One may be inclined to take the ‘non-radicalisation’ of clients as the objective of a measure aimed at halting ongoing radicalisation. Yet it remains unclear how this manifests in each individual case. How does one quantify the absence of an event and accommodate the deeply individualised pathways that lead both to the event and to its absence (Baruch et al. 2018, 478)? Moreover, it must be acknowledged that interventions may only yield the desired impact when employed in intricate combinations, owing to the multifaceted nature of the underlying mechanisms.
The causal complexity that surrounds the effectiveness of measures to prevent and counter violent extremism, and that makes their evaluation so challenging, thus stems from the inability to isolate their effects (Bjørgo 2016, 245) and from the profoundly individual pathways to ‘success’, a term itself subject to diverse interpretations. Two additional factors further compound this complexity. Firstly, these individualised processes are often largely invisible, given that extremism exhibits both latent and manifest characteristics (Hirschi and Widmer 2012, 172). Secondly, certain effects may only manifest in the long term, in contrast to the typically short-term nature of evaluations conducted shortly after project completion. As Mattei and Zeiger (2018, 3) highlight, ‘outcomes develop over long time and the effects are not seen immediately or within a program management cycle’. This temporal gap exacerbates the complexity of the interconnected factors: the longer it takes for an alleged effect of a prevention mechanism to become evident or measurable, the greater the likelihood that it has arisen from the contribution of numerous other factors, or possibly from these alone.
Another problem is of a methodological nature. In evaluation research, experimental methods are considered the gold standard for establishing causality. In such designs, an intervention is administered to one group and withheld from a randomly assigned control group, allowing the causal effect of the intervention to be measured while controlling for other factors. However, applying this approach in the P/CVE field is ethically untenable, given the sensitive nature of the interventions. In addition, creating laboratory-like conditions is impractical in real-world settings: ‘Most research on CVE is performed in real-world context on real-world subjects who are not amenable to random assignment into groups’ (Braddock 2020, 11). Quasi-experimental methods, such as before-and-after testing of the same group, offer potential alternatives. However, both experimental and quasi-experimental methods fall short of capturing causal complexity. To address this complexity, methods such as qualitative comparative analysis or process tracing are required. Surprisingly, these methods have not yet been extensively employed in evaluating P/CVE measures.
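To make the limits of such designs more tangible, here is a minimal, purely hypothetical sketch of a one-group before-and-after comparison in Python. The participants, the 0–10 attitude scale, and all scores are invented for illustration and are not drawn from any of the studies cited here.

```python
# Purely illustrative sketch of a one-group before-and-after (pre/post)
# comparison, the kind of quasi-experimental design mentioned above.
# All scores are invented; the 0-10 'attitude scale' is hypothetical.
from scipy import stats

# Hypothetical scores for the same eight participants, measured before
# and after a (fictitious) intervention.
pre_scores = [6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 5.0, 6.5]
post_scores = [5.0, 6.5, 5.0, 6.0, 5.5, 6.0, 4.5, 6.0]

# Paired t-test: is the mean difference between the two measurements
# unlikely to be due to chance alone?
result = stats.ttest_rel(pre_scores, post_scores)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# Caveat: even a 'significant' change says nothing about *why* it occurred.
# Without a control group, maturation, context, or parallel interventions
# cannot be ruled out -- precisely the causal complexity discussed here.
```

The caveat in the final comment is the crux: such a design can detect change, but it cannot attribute that change to the intervention.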
Lastly, even if considerable thought and determination have gone into evaluating a P/CVE project, the question of replicability arises. As Tore Bjørgo (2016, 241) notes, ‘it is not as simple as saying that if a measure has produced good effects somewhere else then it will also work here’. This makes it difficult, for example, to use the results of individual evaluations as proof of the universal effectiveness of a measure: in a different context, things may play out differently again.
Ways forward
In a recent policy brief, Amy-Jane Gielen and Aileen van Leeuwen (2023, 2) provide guidance on addressing such dilemmas, highlighting that a ‘web of assumptions often obstructs the full integration and utilisation of monitoring and evaluation’. According to them, it can help to formulate a ‘Theory of Change’ at the beginning of an evaluation, i.e. ‘to map out the logical pathways through which interventions are expected to lead to positive results’ (ibid., 4), and to formulate so-called SMART indicators that are Specific, Measurable, Achievable, Relevant, and Time-bound (Gielen 2020). Similarly, Hirschi and Widmer (2012, 177) suggest resolving terminological ambiguity by establishing central definitions in collaboration with stakeholders at the outset of an evaluation.
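As a purely illustrative aside, the logic of pairing a Theory of Change step with a SMART indicator can be written down in structured form. The following Python sketch uses an invented mentoring activity, mechanism, indicator, and target; none of it is taken from Gielen and van Leeuwen (2023) or any other source cited here.

```python
# Hypothetical sketch: one step of a Theory of Change paired with a SMART
# indicator. Activity, mechanism, indicator, and target are all invented.
from dataclasses import dataclass

@dataclass
class SmartIndicator:
    specific: str    # what exactly is measured
    measurable: str  # how and with which instrument it is measured
    achievable: str  # why the target is considered realistic
    relevant: str    # how it links to the assumed causal pathway
    time_bound: str  # by when the target should be reached

step = {
    "activity": "Mentoring sessions for referred youths (hypothetical)",
    "assumed_mechanism": "A trusted adult contact weakens the pull of recruitment narratives",
    "indicator": SmartIndicator(
        specific="Share of participants naming at least one trusted adult contact",
        measurable="Structured interview item administered by programme staff",
        achievable="Target of 60%, assumed from comparable programmes",
        relevant="Trusted contacts sit on the assumed pathway from activity to outcome",
        time_bound="Within twelve months of programme start",
    ),
}

print(step["indicator"].specific)
```

The point of spelling this out is simply that each link in the assumed pathway gets an explicit, checkable indicator rather than remaining part of the ‘web of assumptions’.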
To address the challenge of isolating causal factors, Gielen and van Leeuwen (2023) advocate adopting a ‘contributory understanding for assessing an intervention’s impact’. This approach focuses not on determining whether an intervention has caused an outcome, but on assessing the extent and manner of its contribution to that outcome. Methodological tools such as contribution analysis, outcome harvesting, most significant change analysis, or process tracing (cf. Holmer, Sutherland, and Wallner 2023) are recommended for this purpose. Additionally, the authors suggest drawing on existing indicator databases (e.g., the UNDP’s PVE Indicator Bank) or measurement tools (cf. Barrelle 2015) for inspiration when developing indicators. Another way to address the causal complexity of mechanisms for preventing and countering violent extremism could be the analytical method of qualitative comparative analysis (QCA). Since that discussion would exceed the scope of this blog post, a separate post will be dedicated to the topic.
The strategies proposed by Gielen and van Leeuwen (2023), and all other possible approaches to dealing with the causal complexity that makes evaluating measures for the prevention and combating of violent extremism so challenging, are no panacea. However, like numerous other guidebooks and toolkits developed on this subject in recent years (e.g., Holmer, Sutherland, and Wallner 2023; INDEED 2023), they serve a crucial purpose: reducing the hesitancy of the practitioners who implement such measures, and of the policymakers who fund them, to subject them to evaluation. The absence of monitoring or evaluation, or the failure to use their results under the pretext of impossibility, undermines the legitimacy of these measures. Failing to evaluate leaves the door wide open for P/CVE measures that may not achieve their intended goals and could even have harmful consequences. Thus, to enhance the quality of prevention practice and to effectively delineate the boundaries of measures falling under PVE, CVE, and P/CVE, it is imperative to confront the inherent causal complexity of these interventions.
Sources
Barrelle, Kate. 2015. ‘Pro-Integration: Disengagement from and Life after Extremism’. Behavioral Sciences of Terrorism and Political Aggression 7 (2): 129–42. https://doi.org/10.1080/19434472.2014.988165.
Baruch, Ben, Tom Ling, Rich Warnes, and Joanna Hofman. 2018. ‘Evaluation in an Emerging Field: Developing a Measurement Framework for the Field of Counter-Violent-Extremism’. Evaluation 24 (4): 475–95. https://doi.org/10.1177/1356389018803218.
Bjørgo, Tore. 2016. Preventing Crime: A Holistic Approach. New York: Palgrave Macmillan.
Braddock, Kurt. 2020. ‘Experimentation and Quasi-Experimentation in Countering Violent Extremism: Directions of Future Inquiry’. Researching Violent Extremism Series. Resolve Network.
Gielen, Amy-Jane. 2019. ‘Countering Violent Extremism: A Realist Review for Assessing What Works, for Whom, in What Circumstances, and How?’ Terrorism and Political Violence 31 (6): 1149–67. https://doi.org/10.1080/09546553.2017.1313736.
———. 2020. Cutting Through Complexity: Evaluating Countering Violent Extremism (CVE). Amsterdam: Amsterdam University Press.
Gielen, Amy-Jane, and Aileen van Leeuwen. 2023. ‘Debunking Prevailing Assumptions About Monitoring and Evaluation for P/CVE Programmes and Policies’. ICCT Policy Brief. International Centre for Counter-Terrorism. https://doi.org/10.19165/2023.2.08.
Hirschi, Christian, and Thomas Widmer. 2012. ‘Approaches and Challenges in Evaluating Measures Taken against Right-Wing Extremism’. Evaluation and Program Planning 35 (1): 171–79. https://doi.org/10.1016/j.evalprogplan.2010.11.003.
Holmer, Georgia, Ann Sutherland, and Claudia Wallner. 2023. ‘Compendium of Good Practices. Measuring Results in Counter-Terrorism and Preventing and Countering Violent Extremism’. European Commission, FPI, DG INTPA; UNOCT, UNODC, Hedayah, GCERF. https://www.un.org/counterterrorism/sites/www.un.org.counterterrorism/files/eu_un_compendium_good_practice_web.pdf.
INDEED. 2023. ‘About INDEED’. https://www.indeedproject.eu/.
Khalil, James, and Martine Zeuthen. 2016. ‘Countering Violent Extremism and Risk Reduction: A Guide to Programme Design and Evaluation’. Whitehall Report 2-16. London: Royal United Services Institute. https://static.rusi.org/20160608_cve_and_rr.combined.online4.pdf.
Mattei, Cristina, and Sara Zeiger. 2018. ‘Evaluate Your CVE Results. Projecting Your Impact’. Hedayah.
Pistone, Isabella, Erik Eriksson, Ulrika Beckman, Christer Mattson, and Morten Sager. 2019. ‘A Scoping Review of Interventions for Preventing and Countering Violent Extremism: Current Status and Implications for Future Research’. Journal for Deradicalization 19: 1–84.
Zeiger, Sara, and Anne Aly. 2015. Countering Violent Extremism: Developing an Evidence-Base for Policy and Practice. Perth: Curtin University. https://nwc.ndu.edu/Portals/71/Images/Publications/Family_Counselling_De_radicalization_and.pdf?ver=POmMmM26j53jLqNg1o71wA%3D%3D