How do different countries assess whether their measures to prevent and counter violent extremism (P/CVE) are effective? This is what a group of researchers at the Global Public Policy Institute in Berlin and the Peace Research Institute Frankfurt is investigating as part of the PrEval project. Building on her previous involvement in the ‘International Monitoring’ team, VORTEX doctoral candidate Lotta Rahlf is now working on a European comparative study of P/CVE evaluation systems. Here, she interviews her former colleagues Sophie Ebbecke, Sarah Bressan and Angela Herz, who share initial results of an international comparative study of P/CVE evaluation practices in 14 countries across the globe.
Time and again, concerns are raised that too few P/CVE efforts are evaluated. You have now carried out a very recent and extensive survey – is this still the case today?
Unfortunately, there is still room for improvement. In some countries, many P/CVE efforts remain insufficiently evaluated or are not evaluated at all, while in others evaluation practice is becoming increasingly professionalised. This is a welcome development, but it makes cross-national dialogue for sharing experiences and building capacities all the more important. Many formats that promote such exchange are currently being created, such as P/CVE-specific networks where practitioners and evaluators can compare experiences. Our international comparative study also sheds some light on the somewhat murky field of evaluation by showing how other countries go about it.
Then why are there still difficulties in evaluating P/CVE in some countries?
There are numerous reasons for this, but the most common is that the structures for evaluation are not yet well developed in many countries or that methodological skills are still lacking. Sometimes, stakeholders have varying experiences with P/CVE evaluation and different ideas about measuring effectiveness, and do not yet engage in adequate dialogue. In some countries, there is also a lack of fundamental awareness of the added value of evaluation and insufficient funding for it. To put it simply, all these issues are interrelated. Where there is little funding, there is often little motivation to evaluate, either because the added value is not recognised or because the money is spent on implementing the project rather than on an evaluation. After all, if resources are scarce, the insights gained from an evaluation might be limited anyway.
The lack of skills to conduct high-quality and more frequent P/CVE evaluations is a problem that affects many countries. Some evaluation designs still cause great uncertainty, for example experimental designs, which raise the ethical issue of withholding an intervention from a control group in order to examine the effectiveness of a P/CVE measure. Reservations about such designs therefore remain strong, while less problematic quasi-experimental designs, in which people are not randomised into different groups, are increasingly appreciated.
Sometimes, suitable evaluation structures and skills are in place, yet few evaluations occur. This can also be related to how P/CVE efforts are planned: if evaluation is not considered from the outset, not enough data – or not the right data – will be collected to allow statements about the effectiveness of a measure.
Does evaluation contribute to improving P/CVE efforts?
Every evaluation leads to insights into the functioning or effectiveness of a P/CVE effort, which can contribute to its improvement. However, P/CVE evaluations are frequently associated with accountability – to the funder and/or the public: is the large amount of taxpayer money well spent? In many countries, we observe an interweaving of evaluation purposes, and the evaluation design differs depending on whether learning or accountability is prioritised. Many of our experts stated that the prevailing motive is to justify the effectiveness, efficiency and sustainability of the resources provided. In other cases, scientific interest in empirical evidence for the effectiveness of various measures takes centre stage. Ideally, an evaluation takes place in an environment with a strong learning culture, in which the evaluation is allowed to critically examine the effectiveness of the measure without constant concern about consequences for the P/CVE project’s future.
You also posed questions about inspiring practices regarding P/CVE evaluation. Can you identify some promising developments?
As mentioned earlier, we are seeing more openness towards sophisticated evaluation designs, such as quasi-experimental ones. Pre- and post-designs, for example, are particularly popular in our field for estimating the effect of a measure. In addition, the complexity of the settings in which P/CVE efforts take place, and the challenges this poses for evaluation, are increasingly being researched. How to evaluate so-called multi-agency settings, in which civil society actors and security agencies may both be involved, is currently being explored. Evaluation research is, of course, also influenced by technical developments: the question increasingly arises as to how digital methods can facilitate evaluation, and what possibilities and limitations the use of AI offers. There is still a lot of research to be done here.
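To give readers unfamiliar with such designs a sense of the underlying logic, here is a minimal illustrative sketch – not taken from the study, with all scores invented – of a quasi-experimental pre/post comparison. It contrasts the change in a hypothetical attitude score among participants in a measure with the change in a non-randomised comparison group, a simple difference-in-differences estimate:

```python
# Minimal illustrative sketch of a quasi-experimental pre/post design:
# a difference-in-differences estimate comparing participants in a
# hypothetical P/CVE measure with a non-randomised comparison group.
# All scores below are invented example data, not study results.

participants_pre = [4.2, 3.8, 4.5, 4.0, 3.9]   # attitude scores before the measure
participants_post = [3.1, 3.0, 3.6, 2.9, 3.2]  # scores after the measure

comparison_pre = [4.1, 4.3, 3.7, 4.0, 4.2]     # comparison group, before
comparison_post = [4.0, 4.1, 3.8, 3.9, 4.0]    # comparison group, after

def mean(xs):
    return sum(xs) / len(xs)

# Change within each group over time
participant_change = mean(participants_post) - mean(participants_pre)
comparison_change = mean(comparison_post) - mean(comparison_pre)

# Difference-in-differences: the change attributable to the measure,
# assuming both groups would otherwise have developed in parallel.
effect_estimate = participant_change - comparison_change

print(f"Participant change: {participant_change:+.2f}")
print(f"Comparison change:  {comparison_change:+.2f}")
print(f"Estimated effect:   {effect_estimate:+.2f}")
```

Because no one is randomly assigned, the assumption that both groups would otherwise have developed in parallel does the work that randomisation would do in an experimental design – which is why such designs avoid the ethical problem of withholding an intervention, but require a carefully chosen comparison group.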
Finally, would you like to briefly explain your project?
Our ‘International Monitoring’ team is part of PrEval, a German research and transfer project involving 15 partner institutions. PrEval seeks to develop evaluation and quality assurance in the fields of extremism prevention, democracy promotion and civic education by researching current practice and developing formats that help strengthen it. From the outset, the idea behind our project was that looking abroad can inspire the development of German evaluation practice. By conducting comparative research into evaluation practices in other countries, we sought to identify particularly promising and innovative approaches from which the German prevention and evaluation landscape could benefit. We sent an online questionnaire to 37 experts from 14 countries in different regions of the world. For each country, we gathered insights from two to four experts about, among other things, the actors involved, the financing of evaluation, the methods used, the obstacles as well as the innovations that exist, and how evaluation results are dealt with. To contextualise this, we also asked what measures to prevent extremism exist in each country and asked our experts to assess extremist threats and trends in their countries. Additionally, we will conduct several issue-centred studies that allow us to delve deeper into relevant topics, such as effective support structures for enhancing evaluation capacity.
When will we be able to read more about your research findings?
Our final report and case studies will be published in English by the Global Public Policy Institute in 2024. All other publications from the PrEval project will also be available on the project website: https://preval.hsfk.de/en/
PrEval runs from October 2022 to 2025 and is funded by the Federal Ministry of the Interior and Community.