Two weeks after Russia invaded Ukraine, the Meta technology company decided to permit threats of violence towards Russian soldiers in Facebook posts. As part of what it billed as a temporary change, users in 12 countries (all former Communist states) are now allowed to post death threats and calls for violence against Russian soldiers in the context of the war. The firm has defended its stance by pointing to context: “In this specific context, ‘Russian soldiers’ is being used as a proxy for the Russian military. The Hate Speech policy continues to prohibit attacks on Russians”.
In other words: “we found hate speech that’s OK to Like”. *winking smiley face* *water pistol* *Russian flag*.
Current and former Russian military personnel, and presumably the 53 per cent of Russians who allegedly support the invasion of Ukraine, are now unprotected from discrimination, harassment, and violence on the platform. At least a subset of these individuals are certainly victims themselves: mandatory 12-month military conscription for those born in Russia now apparently justifies death threats. The situation is apparently set to remain under review at Facebook. But it sets a dangerous precedent. Who will Facebook unlike next?
The range and fallout of such decisions are unknown. Already bakeries in Europe are renaming Russian tortes in a fashion reminiscent of “freedom fries”. This hate will hurt us all. And it will start small. I have already deprived myself of pelmeni by second-guessing a trip to the local Russian store. This “temporary” policy change will deepen intellectual isolation, cascading into filter bubbles and echo chambers that amplify hate and its consequences.
The development sheds light on two issues that are long overdue for deeper examination by democratic leaders, and by national and European regulators. The first relates to the distinct similarities between the decisions reached by such companies and the motivations of state-backed influence operations. The second is the effective privatisation of what should be a global commons informed and overseen by accountable leadership that weighs up all considerations – from ethical dilemmas around free speech to the management of a state’s international relations.
On the first issue, it will be an uncomfortable fact for Facebook and others to acknowledge, but the effect of such hate-speech decisions is more than comparable to the influence that Russia aims for with its army of internet trolls. (Even if Russia, with, of course, no sense of irony, reacted to Meta’s decision by calling for the company to face justice for “extremist activities”.) Already the war in Ukraine has led to a rise in “anchoring messaging” from Russian influence operations in Ukraine: for example, the Kremlin’s message of “denazification”, which seeks to frame Ukraine, NATO, and the United States as the aggressors. The point of an anchor message is twofold: those in the camp already buy the message wholesale; meanwhile, those in the middle are dragged closer to the mouthpiece’s goal – where once you would think twice about a €300 steak, now the €100 T-bone looks like a bargain. In the same way, allowing violent hate speech on Facebook could yet enable “Russophobia or any kind of discrimination, harassment or violence towards Russians” – to borrow the words of Meta’s president for global affairs, Nick Clegg, who defended the decision by insisting it would still prevent exactly these things.
Similarly, four years ago Facebook’s then CEO Mark Zuckerberg declared: “My personal challenge for 2018 has been to fix the most important issues facing Facebook — whether that’s defending against election interference by nation states, protecting our community from abuse and harm, or making sure people have control of their information and are comfortable with how it’s used.”
But why should the power to choose whether or not to protect people from harm and abuse rest on the wishes of one man, or of a small number of people (who are largely men)?
Indeed, the power that such major tech firms possess to facilitate the spread of hate speech remains poorly regulated. And, whatever their claims to the contrary, they are motivated by the need to keep their users engaged, on their platforms, and always scrolling – with shareholder value driving it all. Where some countries have taken steps to regulate in this arena, the rules still appear inadequate. Several years ago, Germany introduced a law that exposes firms such as Facebook to heavy fines of up to $57m. This effort was in part motivated by an increase in hate speech directed at immigrants from mostly Muslim parts of the world following former German chancellor Angela Merkel’s 2015 decision to keep the border open to refugees. Regular updates are made to this law to ensure hate remains well addressed, and it would be obscene if it were rolled back. But Germans can now view violent posts from the dozen former Communist states – and the anchoring message can spread all the same.
As the war compels Europeans to recast their strategies in the realms of energy, finance, and defence, they should now turn also to the online world where so much international conflict is already playing out. Seminar rooms have been filled for years with discussions of finally doing this, but for too long the European Union and member states have failed to put in place rules that are fit for the digital era. These new rules should recognise the mushrooming power of online speech and demand transparency, consistency, and accountability from corporate decisions.
Meta’s move should prompt national and EU policymakers to shore up their capabilities to guide, regulate, and develop tech providers for a grown-up, next-generation internet – not leave it to all-powerful private decision-makers. Where the web was once a royalty-free public domain, it has become parcelled up and walled off. To ensure an accessible, fair, and safe public resource – like any other commons – political decision-makers need to take steps now to ensure that the public good rather than self-interest becomes the guiding principle.
In this new space, accountable figures would weigh up these vital questions of privacy and free speech, doing so not in isolation from wider imperatives but in full consideration of them. Of course, they must also listen carefully to tech leaders and creators, and to the insight and expertise they possess, as they build the new framework.
As new forms of the internet evolve, policymakers should take charge now of guiding and investing in new protocols for developments such as the metaverse. This would be in contrast to the hands-off way in which platforms such as Facebook were allowed to emerge, with too little thought given to the wider implications, including the influencing of public opinion and even elections. Proper public digital-literacy programmes are also needed to help people understand how to critique issues such as decisions around hate speech. Alongside this, states need to undertake further military training programmes in cyber-defence and develop tactics to counter influence operations.
Decision-makers should take a more active role in developing existing connectivity and broadcast infrastructure, empower local tech initiatives, and commit to European investment in tech. They should draw on the talents of displaced Ukrainian techies and provide new capabilities – not just regulation but provision.
Haters gonna hate, but we don’t have to like it.
The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.