
Digital Refuge: Understanding the Community Dynamics of Extremist Groups

The appeal of radical groups often lies in creating a strong sense of belonging and a possibility of identification with other group members (Ebner, 2017). Extremist movements thrive on societal divisions and grievances to promote a narrative of cultural and racial superiority, offering simple solutions to complex issues and cultivating a sense of belonging among their members. Forming interpersonal relationships and being part of a group are essential aspects of human life. However, a sense of belonging may be harder to experience for those ostracized by their communities for not fulfilling their roles, or for those socially marginalized on the basis of their social identities. Such individuals may seek social connection and acceptance elsewhere, and a sense of belonging is often cited as a pull factor that drives people to join extremist groups. When journalists ask former extremists how they joined jihadist or far-right groups, it usually takes only a few minutes before they mention that they were ‘in search of belonging’ or ‘looking for community’ (Amarasingam, 2024). 

Yet, the community aspect is often implied rather than explicitly analyzed in scientific research. One of the few studies that engages with the literature on ‘sense of community’ was conducted by Willem De Koster and Dick Houtman (2008). They found that Dutch right-wing extremists who experienced stigmatization in offline social life regarded the Dutch branch of the international Stormfront forum (the largest right-wing extremist internet forum in the Netherlands) as an ‘online refuge’ where they could experience a sense of community. The authors point out that part of the value of extremist online communities is that they allow individuals to feel like part of a broader ‘embattled’ sub-group whose members are linked transnationally and undergo the same struggle. Additionally, Bowman-Grieve (2009) found that Stormfront members place themselves in vulnerable psychological and emotional positions as they recount how they found the far-right movement, openly discuss struggles in their own lives, and describe how the online community has provided them with a safe space of support. 

Some researchers suggest that the online space serves as a platform for ‘identity experimentation’, where individuals can express themselves freely behind the anonymity of a username. This allows them to say things they would not say in public and to adopt personas that differ significantly from their real-world identities, essentially putting on an act or wearing a mask that hides their true selves. For members of extremist groups, however, the opposite is often the case. An IS supporter from the United Kingdom, interviewed by Amarasingam (2024), said that his online community was equivalent to his ‘whole life’ and that he never felt he belonged anywhere except within that community. He also said: “Sometimes it’s like the person online is the real you”. For extremists, it is often in interactions with their families, at school, or at work that they are putting on an act and not being their true selves – sometimes for the simple reason that they do not want to be ostracized or arrested for being a ‘jihadist’ or a ‘neo-Nazi’. Online, by contrast, they become part of a like-minded collective, a transnational brotherhood and sisterhood that truly understands them.

There is psychological evidence that the need for belonging is strong enough for individuals to adopt the goals of a group as their own, and organizations such as the Islamic State appear to have exploited this mechanism in their propaganda, calling for the union of all Muslims regardless of race and ethnicity (Khader, 2016). This allows them to appeal to those who do not experience such acceptance in their own communities. Nevertheless, the importance of community for individuals radicalizing in the online space remains relatively understudied (Amarasingam, 2024). While numerous articles mention ‘online community’ or ‘virtual community’ in passing, only a handful of studies truly unpack the concept or explore its significance for the field. Recently, this area of research has begun to receive more attention and is increasingly seen as an important field requiring further investigation. 

Extremism studies should integrate existing research on the sense of community to determine whether extremist communities are somehow unique. Another important research question is whether online extremist communities provide individuals with the much-needed sense of belonging that, according to research on modern community trends, is slowly being lost from everyday life (Amarasingam, 2024; Putnam, 2000). 

Amarasingam, A. (2024). Belonging is just a click away: Extremism, radicalisation, and the role of online communities. In The Routledge Handbook on Radicalisation and Countering Radicalisation (pp. 196-212). Routledge. 

Bowman-Grieve, L. (2009). Exploring “Stormfront”: A virtual community of the radical right. Studies in Conflict & Terrorism, 32(11), 989-1007. 

De Koster, W., & Houtman, D. (2008). ‘Stormfront is like a second home to me’: On virtual community formation by right-wing extremists. Information, Communication & Society, 11(8), 1155-1176. 

Ebner, J. (2017). The rage: The vicious circle of Islamist and far-right extremism. Bloomsbury Publishing. 

Khader, M. (2016). Combating violent extremism and radicalization in the digital era. IGI Global. 

Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. Simon and Schuster. 


The Power of Stories: How Extremists Shape Beliefs Through Narratives

Although narratology as an academic discipline is a relatively recent invention, people have been interested in how we tell stories for millennia (Plummer, 2012). Stories help us understand and make sense of the world around us. They can be personal but also social and collective, belonging to the group. We need stories in order to live a human life, to construct and reconstruct yesterday and tomorrow. They provide us with coherence and meaning and can turn chaos into order. They also play an important role universally, serving as road maps and key clues to unraveling cultures. On the one hand, stories can stimulate empathy, create bonds with others, and open dialogues; on the other, they can raise challenges, critique, and provoke change. 

Lasting change in social structures cannot be achieved through force and coercion alone, without the support of ‘true believers’ who share an objective based on a common story (Harari, 2014). Both far-right and Islamist extremists seem aware of this and have been using stories to influence the public and make people act according to the principles of their ideological framework. As Julia Ebner (2017) wrote, studying extremism without studying stories is like studying the brain without studying neurons. Narratives have the potential to disseminate extreme ideologies. They serve as the connecting element between non-violent and violent forms of extremism and bridge the ideological spaces between far-right and Islamist extremism. A former English Defence League (EDL) community manager, whom Ebner interviewed for her book, said: ‘Radicalizing people was easy; I just had to tell better stories than the Establishment.’ 

Ebner (2017) identifies five key elements that contribute to the efficacy of extremist narratives: simplicity, consistency, responsiveness, identification, and inspiration. Firstly, the simplicity of black-and-white narratives can bring comfort by eliminating the complexities and ambiguities of life. Extremists provide clear and simple answers to complex phenomena observed in our global environment (Ebner, 2017). People are often drawn to simple, binary answers for several reasons, such as cognitive ease, certainty and security, and emotional appeal. Complex issues can be difficult to understand and deal with. Binary answers offer a sense of cognitive ease because they provide clear solutions without the need for deep thought or analysis (Fiske & Taylor, 1984; Kahneman, 2011; Korteling et al., 2023). Such dichotomy also provides a sense of certainty and security in an uncertain world (Fisher & Keil, 2018). It offers clear guidelines and directions, which can be comforting in times of confusion or chaos. Furthermore, simple answers often appeal to people’s emotions, offering straightforward narratives that resonate with their fears, frustrations, or desires. This emotional connection can make binary solutions more compelling than nuanced, complex ones. Although we attempt to structure our understanding of the world through rational analysis, we often engage rapidly and instinctively in emotional binary framing (Kahneman, 2011). Pejorative, fear-based binary framing of the other is a deeply protective, Darwinian response that keeps us alert and cautious (Bishop, 2023). Evolutionarily derived fear triggers and the cognitive preference for dichotomy do not wait for sophisticated arguments. 

Secondly, compelling stories are characterized by consistency, which is critical not only in maintaining a coherent and uniform narrative over time to build trust and credibility, but also in ensuring that actions align with the narrative to preserve legitimacy (Ebner, 2017). This consistency can sharply contrast with the often-observed inconsistency within established institutions. When mainstream groups fail to maintain narrative consistency or align their actions with their words, it can foster public distrust. In contrast, groups that maintain consistency can leverage these institutional failures, positioning themselves as more trustworthy or genuine alternatives, thereby attracting those disillusioned with the establishment. 

Thirdly, responsiveness refers to the ability to address the grievances and aspirations of the population – issues often neglected by those in power (Ebner, 2017). Extremist narratives often exploit societal dissatisfaction, presenting themselves as the solution to perceived injustices and promising radical change. By addressing the concerns of marginalized groups, extremists can gain support and legitimacy, further strengthening their narrative.

Additionally, the appeal of radical groups lies in creating a strong sense of belonging and a possibility of identification with other group members (Ebner, 2017). Homogeneity of the in-group is fostered through common language, customs, and symbols. The narratives often provoke empathy for certain protagonists and hatred for antagonists. For instance, the now-archived Facebook page of the German neo-Nazi terrorist group Oldschool Society shows pictures of its members hugging each other and celebrating together. 

Lastly, the capacity of extremist narratives to inspire action is critical. Successful stories create a desire to resolve a real or perceived conflict (Burke, 1989). Extremists often build on a narrative of victimhood and imply that the threat can only be resolved by eliminating the other, whether metaphorically or literally. The desired ‘happy ending’ may involve the annihilation of a race, a religion, or a social class, often expected after the ‘final battle’, the ‘inevitable war’, or the ‘final solution’ (Ebner, 2017). This is demonstrated in an extract from the Islamic State’s Dabiq magazine: “We target the crusaders, and we will eradicate and distinguish them, for there are only two camps: the camp of truth and its followers, and the camp of falsehood and its factions” (“A Call to Hijrah,” September 2014). 

References

A Call to Hijrah. (2014, September). Dabiq.

Bishop, K. R. (2023). American Binary Thinking: Psychological Foundations, Religious Framing, and Media Reinforcement. 

Burke, K. (1989). On symbols and society. University of Chicago Press. 

Ebner, J. (2017). The rage: The vicious circle of Islamist and far-right extremism. Bloomsbury Publishing. 

Fisher, M., & Keil, F. C. (2018). The Binary Bias: A Systematic Distortion in the Integration of Information. Psychological Science, 29(11), 1846-1858. https://doi.org/10.1177/0956797618792256

Fiske, S. T., & Taylor, S. E. (1984). Social cognition. Random House.

Harari, Y. N. (2014). Sapiens: A brief history of humankind. Random House. 

Kahneman, D. (2011). Thinking, fast and slow. Macmillan. 

Korteling, J. E., Paradies, G. L., & Sassen-van Meer, J. P. (2023). Cognitive bias and how to improve sustainable decision making. Frontiers in Psychology, 14, 1129835. 

Plummer, K. (2012). A manifesto for stories: Critical humanist notes for a narrative wisdom.


Silencing or Strengthening? The Ongoing Debate Over Deplatforming Extremists 

Deplatforming involves permanently removing controversial figures from social media sites to reduce the spread of harmful or offensive content. This approach has been increasingly adopted by platforms like Facebook, Twitter, and YouTube, targeting numerous high-profile influencers (Jhaver et al., 2021). Despite its intentions, the effectiveness of deplatforming remains hotly debated, particularly after Twitter’s 2016 ban of several alt-right accounts led to a surge in users on Gab, known for its lax moderation and as a ‘free speech’ alternative to Twitter (Rogers, 2020). Among its new users were figures like Robert Bowers, the perpetrator of the 2018 Pittsburgh synagogue shooting, and Milo Yiannopoulos, a right-wing provocateur banned from Twitter for targeted harassment. Additionally, many extremists have migrated to Telegram, which offers secure messaging and has been criticized for its lenient stance on extremist content, thereby allowing such material to persist longer than it might on more mainstream platforms (Shehabat et al., 2017). Telegram’s features, such as public channels and private chats, make it a potent tool for extremist groups, enabling them to broadcast to followers and organize through secure chats. Notably, the platform’s confidence in its security measures led it to offer a $300,000 prize twice to anyone who could break its encryption (Weimann, 2016).

This backdrop sets the stage for a broader critique. Critics point out that deplatforming simply relocates extremists to other online spaces, thus passing the problem elsewhere and potentially strengthening the convictions and distrust of their followers towards society and mainstream information sources (Rogers, 2020). Another significant concern is the role of social media companies as arbiters of speech. By assuming the power to deplatform, these companies take on a quasi-judicial role in determining what speech is acceptable. This raises questions about the concentration of power in the hands of private entities, the potential for biased enforcement of rules, and the impact on freedom of expression and democratic discourse. The fear is that such power could be misused to silence legitimate dissent or favor certain political viewpoints. Critics also argue that deplatforming may inadvertently draw more attention to the suppressed content, a phenomenon known as the Streisand Effect. This term stems from a 2003 incident when Barbra Streisand unsuccessfully sued photographer Kenneth Adelman and Pictopia.com for privacy violation over an aerial photograph of her house, leading to vastly increased public interest in the photo.

In contrast, supporters argue that deplatforming cleanses online spaces and limits the reach of extremist content creators. While these individuals can easily find alternative online spaces to share their ideologies, their overall impact is arguably reduced on less popular platforms. Indeed, several studies support the effectiveness of deplatforming. For instance, Jhaver et al. (2021) suggest that deplatforming can decrease activity levels and toxicity among supporters of deplatformed figures. Rogers (2020) observed that banned celebrities who migrated to Telegram experienced reduced audience engagement and used milder language. Conversely, Ali and colleagues (2021), who analyzed Gab accounts suspended from Twitter and Reddit, noted increased activity and toxicity but, in line with other studies, a decrease in potential audience size.

Given these mixed outcomes, there is a clear need for further research to assess deplatforming’s effectiveness comprehensively. A systematic analysis across various platforms could provide a clearer understanding of deplatforming’s consequences, informing future strategies for managing online extremism.

Ali, S., Saeed, M. H., Aldreabi, E., Blackburn, J., De Cristofaro, E., Zannettou, S., & Stringhini, G. (2021). Understanding the effect of deplatforming on social networks.

Jhaver, S., Boylston, C., Yang, D., & Bruckman, A. (2021). Evaluating the effectiveness of deplatforming as a moderation strategy on Twitter. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-30.

Rogers, R. (2020). Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media. European Journal of Communication, 35(3), 213-229.

Shehabat, A., Mitew, T., & Alzoubi, Y. (2017). Encrypted jihad: Investigating the role of Telegram App in lone wolf attacks in the West. Journal of Strategic Security, 10(3), 27-53.

Weimann, G. (2016). Terrorist migration to the dark web. Perspectives on Terrorism, 10(3), 40-44.


Researching Extremes: The Fine Line of Consent in Online Radicalization Studies

Research on online radicalization operates within a complex web of ethical and legal constraints. While the pursuit of knowledge in this field is crucial, it must be approached with a thorough understanding of these challenges. Researchers are tasked with the delicate balance of advancing academic inquiry while upholding ethical standards and legal requirements. Only through such responsible research practices can the field progress in a manner that is both legally sound and ethically robust.

One crucial ethical aspect to consider is obtaining informed consent, which, as described by Reynolds (2012), poses a significant ethical challenge in academic research on online radicalization. Informed consent is traditionally essential in human subject research, but applying it in online environments, especially in public chat rooms or dynamic social media groups, can be difficult and can carry negative consequences. 

First, when dealing with online communities of an extreme nature, seeking consent risks altering the group members’ behavior, as well as prompting the deletion of certain posts. This could jeopardize the naturalistic character of the data and the overall validity of the research, undermining the goals of the study.

Second, revealing the researcher’s presence might invite reprisals from the subjects against the researcher and the team. Internet research on radicalization, while digital, still concerns the communication of real individuals and should be treated as fieldwork in potentially risky environments. The necessity of maintaining covertness under such circumstances has been addressed previously in the literature (Lee-Treweek & Linkogle, 2000).

Extreme online communities are vigilant about their security, often closely monitoring group interactions to identify and remove anyone deemed ‘unfriendly’ or suspicious. This vigilance is not just about maintaining group integrity but also about controlling the flow of information. In his article, Reynolds (2012) mentions a specific online community whose designated security officer successfully detected and exposed trolls and spies; more than twelve individuals identified in this manner were publicly named and subsequently expelled from the group.

This practice of strict surveillance and control extends to academic researchers as well. Hudson and Bruckman (2004) encountered this directly in their study. When attempting to obtain informed consent from participants, the researchers frequently faced resistance and exclusion: they were expelled from chat rooms 72% of the time when offering participants the chance to opt out of the study and 62% of the time when asking for opt-in consent. This high rate of sanction demonstrates the challenges researchers face when studying online environments. Consequently, Hudson and Bruckman suggest that waiving informed consent might be a more feasible approach in such settings, where the standard practice of obtaining consent is impractical due to the heightened sensitivity and guarded nature of these online communities. Nevertheless, that study was conducted prior to the introduction of the General Data Protection Regulation (GDPR), which is now the governing legal norm.

These regulations were considered in a 2021 report by Sold and Junk, Researching Extremist Content on Social Media Platforms: Data Protection and Research Ethics Challenges and Opportunities (Sold & Junk, 2021). The authors highlight that legal regulations, particularly those outlined in the GDPR, play a crucial role in navigating the challenges of obtaining informed consent.

For instance, they mention Article 9(2)(e), which addresses a scenario in which researchers may utilize data if the data subject has consciously chosen to publish sensitive information. It lifts the data processing prohibition outlined in paragraph 1 of this article, signaling that the data subject, through conscious publication, acknowledges that their data may be used for research purposes. This waiver of the special protection under Article 9 suggests that the data subject may perceive the information as no longer requiring specific safeguards. However, it is essential to note that even when data is consciously published by the individual, it does not entirely forego the protections of the GDPR. Notably, Article 6 remains applicable, emphasizing that the processing of data, even when Article 9 protections are waived, still requires a legal basis. The lawful bases listed in Article 6 include:

  • Consent of the data subject.
  • The necessity of processing for the performance of a contract.
  • Compliance with a legal obligation.
  • Protection of vital interests.
  • The performance of a task carried out in the public interest or in the exercise of official authority.
  • Legitimate interests pursued by the data controller or a third party.

This underscores the GDPR’s commitment to ensuring that the processing of personal data, whether sensitive or not, is conducted within a robust legal and ethical framework. This requires a careful balance between research interests and the data subject’s legitimate interests. Notably, processing without consent is permissible only in limited circumstances, such as when the public interest in the research project outweighs the data subject’s interests.

Furthermore, Article 9(2)(j) of the GDPR provides specific rules for processing special categories of personal data for research purposes and these rules apply irrespective of whether researchers seek informed consent from participants. These special categories encompass sensitive information such as racial or ethnic origin, political opinions, religious or philosophical beliefs, and others. Processing personal data falling under these categories for research demands a meticulous approach. Researchers must demonstrate the specific research question, establish the impracticability of the project without the data, and conduct a careful balancing act to showcase that the research interest significantly outweighs the data subject’s interest in data protection. Adherence to principles of necessity, appropriateness, and proportionality in data processing, as well as the establishment of data access regulations, is necessary to ensure full compliance with data protection regulations. Including these legal considerations in online radicalization research is essential to ensure that studies are conducted with strong ethical foundations and in compliance with the law in this challenging field.

References

European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679

Hudson, J. M., & Bruckman, A. (2004). “Go away”: Participant objections to being studied and the ethics of chatroom research. The Information Society, 20(2), 127-139.

Lee-Treweek, G., & Linkogle, S. (2000). Danger in the field: Risk and ethics in social research. Psychology Press.

Reynolds, T. (2012). Ethical and legal issues surrounding academic research into online radicalisation: a UK experience. Critical Studies on Terrorism, 5(3), 499-513.

Sold, M., & Junk, J. (2021). Researching extremist content on social media platforms: Data protection and research ethics challenges and opportunities. GNET Report, ICSR, King’s College London. https://gnet-research.org/wp-content/uploads/2021/01/GNET-Report-Researching-Extremist-Content-Social-Media-Ethics.pdf