
Silencing or Strengthening? The Ongoing Debate Over Deplatforming Extremists 

Deplatforming involves permanently removing controversial figures from social media sites to reduce the spread of harmful or offensive content. Platforms such as Facebook, Twitter, and YouTube have increasingly adopted the approach, targeting numerous high-profile influencers (Jhaver et al., 2021). Despite its intentions, the effectiveness of deplatforming remains hotly debated, particularly after Twitter’s 2016 ban of several alt-right accounts led to a surge in users on Gab, a ‘free speech’ alternative to Twitter known for its lax moderation (Rogers, 2020). Among Gab’s new users were figures like Robert Bowers, the perpetrator of the 2018 Pittsburgh synagogue shooting, and Milo Yiannopoulos, a right-wing provocateur banned from Twitter for targeted harassment.

Many extremists have also migrated to Telegram, which offers secure messaging and has been criticized for its lenient stance on extremist content, allowing such material to persist longer than it might on more mainstream platforms (Shehabat et al., 2017). Telegram’s combination of public channels and private chats makes it a potent tool for extremist groups, letting them broadcast to followers while organizing through secure chats. Notably, the platform is confident enough in its security that it has twice offered a $300,000 prize to anyone who could break its encryption (Weimann, 2016).

This backdrop sets the stage for a broader critique. Critics point out that deplatforming simply relocates extremists to other online spaces, passing the problem elsewhere and potentially deepening followers’ convictions and their distrust of society and mainstream information sources (Rogers, 2020). Another significant concern is the role of social media companies as arbiters of speech. By assuming the power to deplatform, these companies take on a quasi-judicial role in deciding what speech is acceptable, which raises questions about the concentration of power in the hands of private entities, the potential for biased enforcement of rules, and the impact on freedom of expression and democratic discourse. The fear is that such power could be misused to silence legitimate dissent or to favor certain political viewpoints. Critics also argue that deplatforming may inadvertently draw more attention to the suppressed content, a phenomenon known as the Streisand Effect. The term stems from a 2003 incident in which Barbra Streisand unsuccessfully sued photographer Kenneth Adelman and Pictopia.com for violating her privacy with an aerial photograph of her house, a lawsuit that vastly increased public interest in the photo.

In contrast, supporters argue that deplatforming cleanses online spaces and limits the reach of extremist content creators. While these individuals can easily find alternative venues for sharing their ideologies, their overall impact is arguably diminished on less popular platforms. Several studies lend empirical support to this view. Jhaver et al. (2021), for instance, found that deplatforming can decrease activity levels and toxicity among supporters of deplatformed figures. Rogers (2020) observed that banned celebrities who migrated to Telegram attracted less audience engagement and used milder language. Conversely, Ali and colleagues (2021), who analyzed Gab accounts belonging to users suspended from Twitter and Reddit, noted increased activity and toxicity but, in line with the other studies, a decrease in potential audience size.

Given these mixed outcomes, there’s a clear need for further research to assess deplatforming’s effectiveness comprehensively. A systematic analysis across various platforms could provide a clearer understanding of deplatforming’s consequences, informing future strategies for managing online extremism.

References

Ali, S., Saeed, M. H., Aldreabi, E., Blackburn, J., De Cristofaro, E., Zannettou, S., & Stringhini, G. (2021). Understanding the effect of deplatforming on social networks. In Proceedings of the 13th ACM Web Science Conference (WebSci ’21).

Jhaver, S., Boylston, C., Yang, D., & Bruckman, A. (2021). Evaluating the effectiveness of deplatforming as a moderation strategy on Twitter. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-30.

Rogers, R. (2020). Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media. European Journal of Communication, 35(3), 213-229.

Shehabat, A., Mitew, T., & Alzoubi, Y. (2017). Encrypted jihad: Investigating the role of Telegram App in lone wolf attacks in the West. Journal of Strategic Security, 10(3), 27-53.

Weimann, G. (2016). Terrorist migration to the dark web. Perspectives on Terrorism, 10(3), 40-44.


Researching Extremes: The Fine Line of Consent in Online Radicalization Studies

Research on online radicalization operates within a complex web of ethical and legal constraints. While the pursuit of knowledge in this field is crucial, it must be approached with a thorough understanding of these challenges. Researchers are tasked with the delicate balance of advancing academic inquiry while upholding ethical standards and legal requirements. Only through such responsible research practices can the field progress in a manner that is both legally sound and ethically robust.

One crucial ethical aspect to consider is obtaining informed consent, which, as Reynolds (2012) describes, poses a significant challenge in academic research on online radicalization. Informed consent is traditionally essential in human subject research, but applying it in online environments, especially in public chat rooms or dynamic social media groups, can be difficult and can carry negative consequences.

First, when dealing with online communities of an extreme nature, seeking consent risks altering the group members’ behavior and prompting the deletion of certain posts. This could compromise the naturalistic setting of the data and the overall validity of the research, undermining the goals of the study.

Second, revealing the researcher’s presence might provoke reprisals from the subjects against the researcher and their team. Internet research on radicalization, while digital, still involves the communication of real individuals and should be treated as fieldwork in a potentially risky environment. The necessity of maintaining covertness under such circumstances has been addressed in the literature (Lee-Treweek & Linkogle, 2000).

Extreme online communities are vigilant about their security, often closely monitoring group interactions to identify and remove anyone deemed ‘unfriendly’ or suspicious. This vigilance is not just about maintaining group integrity but also about controlling the flow of information. Reynolds (2012) mentions one such community in which the designated security officer successfully detected and exposed trolls and spies; more than a dozen individuals identified in this manner were publicly named and expelled from the group.

This practice of strict surveillance and control extends to academic researchers as well. Hudson and Bruckman (2004) encountered it directly in their study: when attempting to obtain informed consent from participants, they frequently faced resistance and exclusion, being expelled from chat rooms 72% of the time when offering participants the chance to opt out of the study and 62% of the time when asking for opt-in consent. This high rate of sanction demonstrates the challenges researchers face in studying guarded online environments. Consequently, Hudson and Bruckman suggest that waiving informed consent may be the more feasible approach in settings where the standard practice of obtaining it is impractical because of the communities’ heightened sensitivity. That study, however, was conducted before the introduction of the General Data Protection Regulation (GDPR), which now sets the legal norm.

These regulations are taken up in the 2021 GNET report by Sold and Junk, Researching Extremist Content on Social Media Platforms: Data Protection and Research Ethics Challenges and Opportunities (Sold & Junk, 2021). The authors highlight that legal rules, particularly those set out in the GDPR, play a crucial role in navigating the challenges of obtaining informed consent.

For instance, they point to Article 9(2)(e), which addresses the scenario in which the data subject has consciously chosen to publish sensitive information. It lifts the processing prohibition set out in paragraph 1 of that article, signaling that by consciously publishing the data, the subject acknowledges that it may be used for research purposes. This waiver of Article 9’s special protection suggests that the data subject may no longer perceive the information as requiring specific safeguards. Even so, consciously published data does not entirely forgo the GDPR’s protections: Article 6 remains applicable, meaning that processing still requires a legal basis even when Article 9 protections are waived. The lawful bases listed in Article 6 include:

  • Consent of the data subject.
  • The necessity of processing for the performance of a contract.
  • Compliance with a legal obligation.
  • Protection of vital interests.
  • The performance of a task carried out in the public interest or in the exercise of official authority.
  • Legitimate interests pursued by the data controller or a third party.

This underscores the GDPR’s commitment to ensuring that the processing of personal data, whether sensitive or not, takes place within a robust legal and ethical framework, which in turn requires a careful balance between research interests and the data subject’s legitimate interests. Notably, processing without consent is permissible only in limited circumstances, such as when the public interest in the research project outweighs the data subject’s interests.

Furthermore, Article 9(2)(j) of the GDPR sets out specific rules for processing special categories of personal data for research purposes, and these rules apply irrespective of whether researchers seek informed consent from participants. These special categories encompass sensitive information such as racial or ethnic origin, political opinions, and religious or philosophical beliefs. Processing such data for research demands a meticulous approach: researchers must articulate the specific research question, establish that the project would be impracticable without the data, and demonstrate through a careful balancing exercise that the research interest significantly outweighs the data subject’s interest in data protection. Adherence to the principles of necessity, appropriateness, and proportionality in data processing, together with clear rules on data access, is necessary to ensure full compliance with data protection law. Building these legal considerations into online radicalization research is essential if studies in this challenging field are to rest on strong ethical foundations and comply with the law.

References

European Union. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679

Hudson, J. M., & Bruckman, A. (2004). “Go away”: Participant objections to being studied and the ethics of chatroom research. The Information Society, 20(2), 127-139.

Lee-Treweek, G., & Linkogle, S. (2000). Danger in the field: Risk and ethics in social research. Psychology Press.

Reynolds, T. (2012). Ethical and legal issues surrounding academic research into online radicalisation: a UK experience. Critical Studies on Terrorism, 5(3), 499-513.

Sold, M., & Junk, J. (2021). Researching extremist content on social media platforms: Data protection and research ethics challenges and opportunities. GNET Report, International Centre for the Study of Radicalisation (ICSR), King’s College London. https://gnet-research.org/wp-content/uploads/2021/01/GNET-Report-Researching-Extremist-Content-Social-Media-Ethics.pdf