Disinformation for beginners: How access to TikTok is threatening European security
TikTok is facilitating the spread of disinformation and amplifying cybersecurity threats across Europe. The EU should implement a four-step strategy to mitigate the platform's nefarious impacts on its democratic processes and security
In December 2024, evidence came to light that “foreign actors” had mounted a coordinated TikTok campaign during Romania’s presidential election to garner support for the pro-Russian candidate Calin Georgescu. As a result, the European Commission opened formal proceedings against TikTok and a Romanian court annulled the first round of the election.
Although the investigation postponed Romania’s vote until spring 2025—and the moderate candidate Nicusor Dan ultimately won—the episode illustrates how Russian backing allows populist parties in Europe to exploit TikTok’s algorithms for their own agenda. But such interference is not confined to the continent’s east: during Germany’s February 2025 federal election, fact-checking organisations in the country reported at least 200 cases of false political statements made on the platform. Evidence also shows that, in Poland’s recent presidential election (which returned the right-wing candidate Karol Nawrocki), the TikTok algorithm heavily favoured right-wing content over other political themes.
Indeed, according to a recent German study, Germany’s TikTok users are particularly receptive to Russian and Chinese disinformation, and far more likely to believe anti-Western and pro-authoritarian narratives that overlap with the messages of populist parties. In Germany, the most active and dominant political party on TikTok is the far-right Alternative for Germany (AfD), with the populist Sahra Wagenknecht Alliance (BSW) gaining ground. Given that TikTok is primarily used by young people in Germany (around 70% of 16 to 29-year-olds), this demographic is most at risk from nefarious actors exploiting TikTok’s algorithm to gain political and social advantage and to push populist parties aligned with their agendas.
But, for the actors behind such campaigns, capturing a young audience is just the beginning. Their aim is to polarise societies, undermine trust in democratic institutions and strengthen political actors sympathetic to their agendas.
More access for more influence
The European Union needs to make its population aware of how both Russia and China use social media to spread disinformation. There is evidence that the two countries are aligning their strategies in some disinformation efforts, but each still takes a different approach to “structure and emphasis”. Russia regards hybrid warfare as part of its strategic culture: while Poland has accused Russia of carrying out an arson attack in Warsaw, Moscow also favours “peacetime techniques” whose primary aim is to steer Western decisions in favour of Russia’s strategic interests and to undermine democratic institutions.
On the other hand, Chinese ownership of ByteDance, TikTok’s parent company, poses a different but equally serious security risk. Private messages on TikTok are not end-to-end encrypted and, even if an app store has checked and approved the app, future updates may contain malware. Chinese companies such as ByteDance operate under the influence of the Chinese Communist Party (CCP), and ByteDance collects vast amounts of user data, which it must make available to the Chinese authorities on request. Access to user information on ByteDance apps such as TikTok therefore makes it easier for Beijing to actively gather data on US and European citizens. In this respect, TikTok’s access to information such as mobile location tracking data could offer China broader strategic geopolitical advantages.
Four-step plan
In theory, the EU has a strong set of rules to regulate social media platforms. Notably, Ireland’s privacy regulator, the Irish Data Protection Commission, recently fined TikTok €530m for illegally transferring European user data to China. The European Commission has also opened formal proceedings against TikTok for electoral risks under the Digital Services Act (DSA), although the investigation remains ongoing. But the urgency of the threat demands stricter measures and swifter implementation: the EU is still refining its use of the DSA, and enforcement lags behind the rapid spread of online disinformation.
To create a safer digital environment, European governments—led by the EU—need to pursue a four-step strategy which focuses on curbing the spread of disinformation on social media platforms, and mitigating the impact of TikTok’s Chinese affiliation.
1. Ban TikTok usage on official devices
Given the security risks, public administration bodies in EU member states need to ban TikTok and other ByteDance apps from official devices used by state authorities, Europe-wide. Many countries have already done so, but enforcement is inconsistent across Europe. Public sector employees in member states and EU institutions should also receive training in media literacy, cybersecurity and digital hygiene to build internal resilience against foreign influence and cyberattacks.
2. Leverage regulatory pressure
The EU must apply stronger legal pressure on ByteDance using its existing regulatory tools—including the DSA and the General Data Protection Regulation (GDPR)—until the company feels the weight of financial penalties and complies with the EU legal framework. It should also speed up its work on the issue: the EU’s investigation into TikTok’s handling of the 2024 Romanian presidential election, whose first round took place over six months ago, is active but unresolved. Ireland, which hosts the European headquarters of many big tech firms, is overburdened with enforcement responsibilities and often criticised for lax GDPR enforcement. And, as EU digital legislation continues to evolve, overlapping responsibilities across regulatory bodies are further complicating efficient enforcement. Member states should therefore consolidate their digital legal standards to avoid a fragmented approach to the spread of disinformation and to increase the effectiveness of responses to disinformation campaigns.
3. Accelerate implementation of the DSA
The DSA requires each EU member state to appoint a Digital Services Coordinator (DSC), but many are yet to comply. Poland has only recently announced a coordination body; in Germany, the DSC office was understaffed during the federal election, which may have contributed towards the AfD’s ability to disseminate harmful information via TikTok.
Effective and consistent enforcement of EU regulations like the DSA is crucial, as these rules are designed to hold big tech companies accountable, to safeguard elections throughout Europe and to protect fundamental rights in the evolving digital landscape.
4. Increase cooperation with Taiwan
Despite the geopolitical threat from China, Taiwan has not banned TikTok. Instead, it has developed rapid-response systems that address disinformation within two hours. But Taiwan is facing obstacles in creating a law similar to the DSA; while the country can show Europe how to react to or even prevent Chinese disinformation attacks, Taiwan can also learn from the European experience with Russian disinformation.
Taiwan is also on the frontline of developing defences against cyberattacks. Such cooperation is especially relevant now, as Taiwan faces funding challenges due to the US foreign aid freeze. Europe should therefore increase cooperation with Taiwanese cybersecurity and disinformation response institutions.
No clear path forward
Only if these four steps fail should the EU consider a full TikTok ban. But this would not be straightforward. Statista projects that TikTok will have 285.21 million users in Europe by 2028; in Germany alone, 42% of social media users are on TikTok. It is unlikely this audience would go quietly. Any ban would also open the EU up to accusations of censorship, as well as portraying the bloc as insular and anti-global at a time when it faces an uphill battle to retain the trust of its citizens.
On the other hand, India’s 2020 ban on TikTok and other Chinese apps demonstrated that users can adapt, shifting to alternatives like Instagram Reels and YouTube Shorts. Other Chinese apps, such as WeChat, also pose national security risks; after banning TikTok, India went on to prohibit dozens more apps on similar grounds. Whatever the EU decides, it should first conduct a comprehensive review of such applications to ensure compliance with EU regulations and to address public concerns about cybersecurity and censorship.
From a legal standpoint, any ban would need to be well-structured and narrowly focused to avoid creating a precedent for broader digital censorship. Politically, gaining widespread support for such a ban may prove difficult, especially since populist parties—which benefit from TikTok’s reach—are unlikely to lend their votes to such a motion. Any party that did show support would risk alienating its voter base, since young people of all political leanings use the platform.
*
Europe needs to take proactive steps to safeguard its digital ecosystem. A TikTok ban is a last resort, but the risks posed by disinformation, algorithmic manipulation and foreign surveillance are too great to ignore. Better coordination between EU institutions and national authorities is required, as well as significant increases in the resources of enforcement agencies and regular training for staff handling digital regulation.
The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.