Out of the slow lane: How Europe can meet the challenge of AI

EU member states need to turn their declarations of intent about international cooperation on technological sovereignty into real projects.

According to Kai-Fu Lee, former president of Google China, Europe has little chance of winning even the “bronze medal” in the global race to develop artificial intelligence (AI). Although this sounds like a harsh judgment, there seems to be widespread agreement among analysts, commentators, and policymakers that Europe is missing the boat on technological innovation in general and AI in particular. Excessive regulation, a business environment ill-suited to start-ups, a lack of investment – the list of grievances is long. But, while these concerns are not completely unfounded, they are somewhat self-flagellating. “Europeans like to tell [themselves] how much behind they are,” one participant in a recent ECFR workshop on AI in Europe noted. Europe, on closer inspection, does not face insurmountable hurdles in harnessing AI – and, increasingly, both the European Union and its member states are taking on the challenge.

In this regard, 2019 was a crucial year. Many European governments drafted strategies on how to support AI development and harness its benefits for their countries. From Sweden to France, Estonia to Italy, governments published national AI strategies that discussed AI-related challenges in research, data privacy, employment, and public education. Although national priorities diverge on some points, there is widespread agreement in Europe that AI will be one of the decisive technological developments of the future – one that no European country can afford to miss. Accordingly, states have put in place a variety of new funding streams, and created expert and working groups, to help with the effort.

The EU has identified trustworthy, ethical AI as its primary focus

The EU has played an important role in this, publishing an AI strategy in 2018 and a Coordinated Plan on Artificial Intelligence in early 2019. In the latter, the EU encourages member states to engage further with the topic and formulate strategies, while aiming to ensure that they work together. Importantly, the EU has identified trustworthy, ethical AI as its primary focus. In addition, Ursula von der Leyen has declared that, with her as its president, the European Commission will be “geopolitical”, working towards increased European “sovereignty” in various sectors. The von der Leyen Commission will particularly focus on enhancing “Europe’s technological sovereignty”, as she set out in the mission letter for the commissioner for the internal market.

The key challenges in 2020 will be for member states to make good on their rhetoric and for the EU to harmonise the various national approaches, thereby turning declarations of intent about international cooperation into real projects. The EU’s plans to become a world leader in ethical AI, and to create appropriate regulation for firms operating in Europe, will be challenging to implement. It is likely that, the moment the debate goes beyond statements, it will become harder to conclude agreements between different European states. One can expect this to be the case regarding digital commissioner Margrethe Vestager’s plans for increased regulation of (predominantly US) technology firms. Previous EU measures such as the General Data Protection Regulation (GDPR) showed that the EU can have a global reach – but they also demonstrated how well-intentioned policies can have unintended consequences, as the GDPR ended up benefiting large American companies. There will almost certainly be intense debates about these issues in Europe.

Finally, in this context, there is a risk of the emergence of AI nationalism – meaning that, due to their concerns about safeguarding national champions, states could adopt protectionist measures. The EU will need to engage with these concerns.

Another challenge pertains to new technologies in the military realm. Initially, Europe’s debate on using AI in the armed forces was muted compared to the debates in the United States and other countries. Stakeholders in Europe, it seems, were concerned that heated public campaigns against “killer robots” (lethal autonomous weapons systems) could damage their image if they publicly discussed AI for military purposes. As 2019 progressed and work on European common defence projects such as Permanent Structured Cooperation (PESCO) and the European Defence Fund became more concrete, however, one could observe a slowly growing willingness in Europe to engage somewhat more openly with the question of how to use AI in the military, beyond autonomy. This trend was evident among not just politicians but also the armed forces and private companies. Europe already has industrial capabilities in this area, as well as a range of projects in development in various member states. Several of the 34 PESCO projects also include AI elements.

In 2020 these projects will need to advance beyond the definition phase, which will require governments to invest in them. This will, in turn, lead to greater public interest in the matter – and possibly renewed protest.

The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.

