With many worried about a Russian information offensive in the West, European states are in the process of developing defence mechanisms. Unfortunately, several seem to be reacting with a legalistic approach that will likely do more harm than good.
France, Germany, Italy and the UK are among those setting up measures to identify, block or remove ‘fake news’ from the internet. All these proposals suffer from the same problem: an inability to objectively and usefully define fake news without veering into political censorship.
As many experts are warning, ‘fake news’ is becoming a weaponised, politicised term, applied to everything from genuine hoaxes to merely disputed opinions. To further confuse things, hate speech, propaganda, and even satire seem to be falling under this umbrella.
Measures to fight fake news must therefore choose a definition from this spectrum. A strict definition of ‘fake news’ would only target completely fabricated information, spread with intent to influence public opinion. A good example would be the story that claimed that Pope Francis had endorsed Donald Trump.
If governments lean towards such a restrictive definition, as France seems to be doing, they risk inefficacy. Pure hoax stories are just a tiny fraction of today’s ‘information disorder’, and they are not the primary type of content amplified by automated bots and trolls, which rely heavily on opinionated content and even regular mainstream and local media content.
There is also no consensus that fake news, strictly speaking, has any significant impact on elections. This is likely because most fake stories come from hyper-partisan outlets that have a small audience – and one which already holds highly polarised opinions. As such, fake stories are unlikely to convince undecided voters or even to be the difference between someone turning out to vote or staying at home.
For these reasons, even if identified fairly and efficiently, acting strictly on fake news is unlikely to prevent malicious actors from meddling in our elections or polarising our societies.
If, on the other hand, governments choose a broader definition of ‘fake news’ that includes malicious and misleading – but opinionated – content, then they face a massive problem of objectivity and legitimacy. Deciding what counts as a ‘malicious, misleading opinion’ as opposed to just a ‘different opinion’ is an inevitably political choice.
One should also remember that such content appears in mainstream media outlets as well as fringe or foreign sources. Mainstream media outlets have greater reach and greater power to influence public opinion, and while they generally offer more trustworthy reporting, commercial pressure for more sales, more clicks and more views, paired with partisan bias, can lead to divisive and polarising coverage. This was clear in the American electoral campaign, the Brexit campaign and the Catalan independence movement.
But institutional efforts to regulate this type of disinformation risk being skewed in favour of domestic actors and the political forces that have pushed them. It is therefore unlikely that mainstream outlets (such as state broadcasters) will be targeted to the same degree as alternative sources, despite their greater impact on public opinion.
In the long term this politicised approach can only deepen the cleavage between those who already trust the political mainstream and those who don’t – perhaps pushing the latter further into the arms of hyper-partisan alternative sources.
I am not advocating a soft approach to foreign information offensives in Europe. It’s just that the vague ‘fake news’ frame is misplaced, and the proposed measures seem ineffective at best, counter-productive at worst. While one should not take lightly the efforts made by foreign powers to interfere in domestic politics, it is just as important to remain vigilant about threats to free speech and freedom of the press. These are essential sources of legitimacy for any democratic system, and if double or sloppy standards are applied, you can expect to see trust in institutions and mainstream media drop further.
European governments and the European institutions should therefore distinguish firmly between responses to ‘technical’ and ‘cognitive’ offensives waged by foreign and domestic actors: the former being hacks and cyber-espionage, the latter being efforts aimed at shifting public opinion.
Technical assaults, such as the criminal hack and release of emails from the Democratic National Committee and Hillary Clinton's campaign chairman John Podesta, must be prevented. But this has nothing to do with ‘fake news’ – the emails were not doctored and were spread largely through mainstream media (although amplified via the usual channels by activists and bots). They were ultimately part of a cognitive offensive aimed at worsening public perception of Clinton, but the illegal part was the hack itself, not the resulting spread of information.
Cognitive threats, however, should not be tackled with restrictive measures, for the reasons outlined above. Instead, information experts generally agree that institutions should focus their counter-efforts on research, transparency, and the education of citizens.
Much research needs to be done, and much data collected, to understand these phenomena in detail. Tech companies, Facebook and Twitter in particular, should be asked to share data with researchers and to be transparent about how they apply community standards, moderation rules and ad targeting.
Governments and grant-makers should direct more funding to companies and NGOs working on journalistic and technological solutions to monitor the quality of information, and to those that produce non-partisan, independent fact-checking.
Finally, great efforts must be made to ensure that citizens have the right tools to navigate the accelerated information flows of the internet era. Both public and private institutions should fund digital media literacy programmes for children and adults alike.
These are all long-term policies that involve many independent actors and might not have immediate effects, but they could finally have a significant positive impact without risking further polarisation in societies already on edge.
Lorenzo Marini is web coordinator at ECFR and co-creator of YouCheck, a social platform to verify information with expert sources
The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.