The internet has become an integral part of our lives and critical to the functioning of modern society. But this dependence clearly has downsides that extremists and terrorists have learned to exploit.
Militant groups across the ideological spectrum have turned to the internet to recruit new members, to disseminate their propaganda, and to securely communicate with each other. Their activities represent a threat to both on- and off-line communities.
The scourge of internet radicalisation has prompted governments and companies to address the issue through a wide variety of measures, ranging from intelligence-gathering to censorship. To date, however, the policies adopted, especially the more intrusive ones, have attacked the problem from only one side.
In a networked era, fighting extremism requires a networked approach. Top-down government-led initiatives simply won’t work: at best they divert extremists to a different platform; at worst they suppress legitimate speech.
Part of the solution is to rely on the internet’s most essential strength – its vast army of ordinary users. Governments need to empower them with facts and encourage them to disseminate viral counter-narratives. They have the power to do so.
The online activities of jihadi groups have become a focal point for policymakers since the rise of the Islamic State’s cutting-edge propaganda machine, which, in addition to glossy magazines such as Dabiq and Rumiyah, includes Hollywood-esque online videos and games. At a lower level of sophistication, jihadi ideology is easily accessible through faux-scholarly online videos, such as those uploaded by hate preachers and devoured by London Bridge attacker, Khuram Butt.
It is not just Islamists, of course, who are taking advantage of online video platforms and bullet-proof hosting providers. In France, the extreme right’s organised online network – termed the fachosphère – consists of a set of highly active blogs, forums, and sites, such as Fdesouche, that preach violence against foreigners. Recently a 23-year-old far-right extremist was arrested for plotting the assassination of President Macron, having announced his plans in a chatroom on the gaming site Jeuxvideo.com.
On the other end of the political spectrum, the extreme left in Europe is active on platforms such as indymedia.org. The riots that raged in Hamburg during the G20 Summit, for example, were predominantly organised on this platform, with the aim of bringing together various smaller groups under the umbrella of the militant ‘black bloc’ movement. The vocal propaganda machine run by Antifa and other leftist organisations to protest globalisation made it easy for militant groups to identify and connect with each other through comment sections and forums. The unprecedented level of violence during the G20 Summit has now sparked calls in Germany to create a European database to monitor violent left-wing extremists, and to fight the militant left in the same manner as the country tackles far-right groups.
The internet did not create radicalisation. But the effortless distribution of content and the far-ranging reach of online propaganda are catalysing recruitment and calls for violence in a globalised and interconnected world. So what measures have governments, companies, and NGOs implemented to tackle this problem?
To date, the dominant approach pushed by European policy-makers and companies alike is to focus on the restriction and removal of internet content in order to contain the spread of extremists’ messaging efforts. These ‘hard’ measures have been implemented at the EU level by, for example, the establishment of Europol’s Internet Referral Unit (IRU) in 2015. The IRU’s mission is to support EU member states in the flagging, referral, and removal of extremist content in close cooperation with online service providers. In the IRU’s first year alone, 8,949 pieces of content, primarily jihadist in nature, were successfully removed.
More recently, following a meeting between Theresa May and Emmanuel Macron, the UK and France agreed to launch a bilateral campaign to tackle online radicalisation by imposing fines on tech companies that fail to remove extremist content. Similarly, on June 30 the German Bundestag passed the Netzwerkdurchsetzungsgesetz, which creates a 24-hour deadline for social media platforms to remove online hate speech or face fines of up to €50 million.
Such measures have elicited strong criticism. Peter Neumann, Director of the International Centre for the Study of Radicalisation at King’s College London, for example notes that, “Censorship has caused extremists to go elsewhere. It may temporarily help to disrupt their online activities, but it doesn’t eliminate them.”
Content removal also pushes extremists toward more private communication channels and encourages the adoption of sophisticated tools to hide and spread their online messages. Twitter’s suspension of more than 200,000 extremists’ accounts in August 2016, for instance, was followed by the widespread migration of extremist content from open and public platforms to end-to-end encrypted messaging services such as Telegram and WhatsApp, thereby making it more difficult for law enforcement agencies to monitor extremist activities.
Censorship policies have also been condemned by human rights advocates, as they are widely seen as violating freedom of speech, and as feeding narratives of government oppression.
Another approach to tackle online radicalisation is grounded in intelligence collection and evidence gathering. Usually facilitated in close partnership with law enforcement, social media companies, and internet service providers, its overarching goal is to map extremist networks, identify their capabilities, and highlight potential terrorist targets.
Intelligence collection works hand-in-hand with the removal of content, as practiced by the IRU and the ‘Check the Web’ portal, a database created in 2007 to map jihadist online activities. However, increased censorship and deletion of content by service providers makes the mapping of networks extremely challenging for law enforcement because it thins out valuable intelligence and hides notable relationships.
The intelligence-led approach has proven most successful in instances in which ordinary internet users were made part of the process. This was the case with the would-be assassin who plotted to kill Macron: his chat messages were reported by anonymous internet users to Pharos, an online portal developed for the French Ministry of the Interior. The platform is designed to prompt government action by investigating the authenticity and threat level of reported messages and online content.
Getting Ahead of Radicalisation
An even more effective approach would involve empowering internet users by providing them with the facts and narratives to counter extremists and to burst the ‘echo chamber’ bubbles that isolate potential recruits from alternative views.
There has been some progress in this area. At the EU level, this approach was endorsed by the European Commission with the creation of the Radicalisation Awareness Network (RAN) in 2012. RAN’s Communication and Narrative Working Group is focused on creating communication and counter-messaging strategies to challenge extremist content online. However, this initiative, like other ‘soft’ approaches introduced at the national level, remains under-developed and largely conceptual when compared to more intrusive means such as the deletion of content and censorship.
Tech companies have also taken a proactive stance in developing soft measures. Founded by Facebook in January 2016, the Online Civil Courage Initiative provides training to NGOs and grassroots activists to support ‘counter-speech’ online.
Similarly, Google’s technology incubator Jigsaw and UK start-up Moonshot CVE have teamed up to create a mechanism whereby people searching for extremist content are shown counter-messaging advertisements alongside the search results.
While the effects of soft measures remain difficult to quantify, the results appear promising. According to Kent Walker, General Counsel at Google, “In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.”
Whether this exposure translates into offline behavioural change remains to be seen, and some critics have argued that the counter-narrative approach is too weak to sway believers surrounded by extremist influences in their offline environment. Some have even suggested that counter-narratives may inadvertently reinforce extremists’ narratives by further disseminating them. Still others argue that the counter-narrative approach is unethical because of the potentially unchecked biases of its authors.
To date counter-narrative programs have been implemented through a top-down approach, which has largely failed to empower the average user. Such an approach might have worked in the pre-digital era when counter-propaganda efforts could flow through a limited number of radio and television stations. It won’t work in an age in which people get their information through a wide variety of media channels.
For soft measures to be effective, they need to stimulate a bottom-up movement. The ICSR suggests the establishment of a start-up fund that would support counter-extremist grassroots initiatives, or the creation of an ‘Internet Users Panel’ that could help self-regulate the internet and enable online communities to police themselves. Both governments and private companies would do well to experiment with these types of measures to galvanise ordinary citizens to help tackle online radicalisation.
Ultimately, top-down government action is at best a partial solution to the complex problem of stopping the spread of hate messages. Ordinary internet users, with the support of political institutions and the private sector, need to be included in the fight against radicalisation online.
The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of its individual authors.