- France, Germany, and Italy plan to adopt national and European goals on AI, albeit for different reasons.
- France’s AI strategy appears to be driven by the conviction that, in addition to creating risks, AI presents opportunities for the economy and society.
- In contrast, the German – and, to the extent that one can infer, Italian – approach to AI is markedly driven by fear of losing economic opportunities.
- While the French strategy includes a focus on the military and security elements of AI, its German counterpart almost completely neglects this area.
- The EU appears to be largely uninterested in AI’s geopolitical importance, but its member states can only influence the global development of AI if they act together.
Artificial intelligence (AI) is widely expected to become vital to just about everything. It is changing the economy, affecting the ways in which people shop, communicate, and conduct research. It is difficult to think of activities that the implementation of AI cannot speed up, improve, or otherwise change. Investment bank Goldman Sachs calls AI “a needle-moving technology for the global economy … impacting every corporation, industry, and segment of the economy in time”. AI is not just another technology; it is an enabler that some analysts have likened to the combustion engine or electricity – which caused revolutions no country could afford to disregard.
Governments have increasingly come to share this conviction and are now looking for the best ways to help national economies deal with AI. They are trying to prepare companies, labour markets, and societies for the expected disruption; they aim to facilitate world-class research and provide training that will prepare citizens for the world of the future. Governments’ weapon of choice in this battle? The national AI strategy.
Across Europe, governments are either writing or have already published national strategies on how best to support the development and application of AI. The European Union’s AI strategy, called the “Coordinated Plan on Artificial Intelligence”, asks its member states to put in place national AI strategies or programmes by mid-2019. So far, at least seven EU member states – Belgium, Denmark, France, Finland, Germany, Sweden, and the United Kingdom – and the Nordic-Baltic region have published such strategies. At least ten others – Austria, the Czech Republic, Estonia, Italy, Latvia, Poland, Portugal, Slovenia, Slovakia, and Spain – are in the process of writing them.
But how to formulate such a strategy? What should governments focus on, given that AI promises (or threatens) to change so much? How do states choose where to concentrate their efforts and what to leave out? This policy brief assesses the national AI strategies of France and Germany, as well as Italy’s recently published “Proposals for an Italian AI Strategy” and other official documents that pertain to AI and technology generally, to understand these countries’ priorities on AI. It focuses on the strategies because they are likely to define the states’ approaches to AI.
The paper concentrates on two main issues. Firstly, it examines the role of European cooperation, especially the emphasis the three countries place on working with EU institutions and other member states to tackle the challenges of AI and take advantage of the opportunities it offers.
Understanding these challenges and opportunities is key to shaping the EU’s approach to AI. Seeing its role as one of coordination, the union has chosen the development of “ethical AI” as its main mission in this area. But the EU can only play such a role if member states believe that working with their European neighbours, as well as the union’s institutions, will be beneficial to their AI efforts.
Secondly, the paper examines the extent to which the three countries’ strategies deal with the role of AI in defence and security. AI is widely expected to have a major influence on the future of warfare. The United States and China have already made significant investments in the development of AI-enabled weapons. As this could have major implications for defence capabilities, strategic stability, and geopolitics more broadly, it is important to identify where Europe – whose public debate on military AI has been relatively muted – stands on the issue.
What is AI?
“Artificial intelligence” (AI) is an ill-defined term, not least because its meaning has changed over time. It generally refers to efforts to build computers and machines able to perform actions that one would expect to require human intelligence, such as reasoning and decision-making. At various points in time, there has been a widespread expectation that the emergence of “real AI” – comparable to human intelligence – was imminent. However, after scientists created systems that could perform tasks once thought to be the preserve of humans, AI came to be defined by even more complex tasks. One way to conceive of AI is as a shift from humans telling computers how to act, to computers learning how to act. The creation of systems that are capable of this depends mostly on the application of machine learning techniques – at the moment, particularly so-called “deep learning” and neural networks.
Currently, most AI applications are “narrow” – focused on one specific task – rather than “general” – capable of reproducing human-level intelligence. In fact, most current narrow AI is “brittle”, failing when confronted with tasks that differ even slightly from its training.
Given the various possible applications of AI, it is misleading to think of AI as a stand-alone capability or one application of technology. Instead, it is more helpful to view AI as an enabler and thus speak of systems such as “AI-enabled financial trading” or “AI-enabled cyber defences”.
Three countries, three approaches: How to write a national AI strategy
France has shown considerable interest in AI in recent years. The country has made the technology a priority at the highest level, with President Emmanuel Macron discussing the topic at length in a widely read Wired interview in 2018 (two years after Barack Obama, then US president, participated in a similar interview with the magazine). In March 2017, the French education and research ministry published “France AI”, a report that included recommendations from expert working groups. The document was followed a year later by the 152-page strategy “For a Meaningful Artificial Intelligence – Towards a French and European Strategy”, published simultaneously in French and English.
While it was commissioned by the government, the French strategy was written by a team led by Cédric Villani, one of France’s AI stars and a member of parliament for La République en Marche! As a result, the strategy is more ambitious than a governmental or ministerial publication would have been. It is also more targeted: the report explicitly aims to avoid spreading state efforts too thinly, identifying four sectors in which French industry “can seriously envisage playing a leading role at [the] global level and compete with non-European giants” and in which public money should be spent. These sectors are health, transport and mobility, the environment, and defence and security.
The strategy is detailed and coherent in its proposals for strengthening French and European AI efforts. It focuses on the availability of data for AI, arguing that the authorities should make data “a common good” by granting researchers access to information from government-funded projects and by incentivising private companies to publish their data.
The strategy states that French academic research is at the forefront of global developments in mathematics and AI, but that the country’s scientific progress “does not always translate into concrete industrial and economic applications”. It recommends tripling the number of people trained in AI in France in the next three years. To achieve this, it also advocates for “hefty salary top-ups” and a considerable reduction in related administrative tasks in research institutions and universities. The strategy also discusses AI’s impact on the labour market, the development of ecological AI, and ethics and diversity issues in AI. It recommends that France create a digital technology and AI ethics committee that is open to society and promotes public education.
The strategy has a generally upbeat tone; while it does not neglect the dangers of AI, it calls it “one of the most fascinating scientific endeavors of our time” and notes that “in recent years, artificial intelligence has entered a new era, which gives rise to many hopes”. Villani, in his foreword to the strategy, speaks about his personal enthusiasm for AI and expresses its authors’ conviction that “France – and Europe as a whole – must act synergistically, with confidence and determination, to become part of the emerging AI revolution”.
Although it was not written by policymakers, the strategy appears to have been drafted with a geopolitical mindset. Its authors reference Russian president Vladimir Putin’s now famous statement that “whoever becomes the leader in this sphere will become the ruler of the world” and comment on the strategic importance of AI. And it is on the geopolitical level that the strategy sounds the alarm, adding that “France and Europe can already be regarded as ‘cybercolonies’ in many aspects”.
For Germany, last year was also important. Having been criticised as slow to address AI issues, the German authorities noticeably sped up their activities in the area in 2018. In June, the Bundestag put together the Enquete-Kommission, a committee of enquiry composed of MPs and experts. The following month, the government published the “cornerstones of its AI strategy”; in August, it created a digital council to advise it. This was followed by the publication of the German AI strategy in November. Throughout the year, the government, ministries, and other private and public actors held public conferences, online consultations, and expert hearings on AI.
With its 47-page “National Strategy for Artificial Intelligence – AI made in Germany”, Berlin followed a different approach from that of Paris. Instead of being an expert publication, the document is the result of a ministry-wide consultation under the leadership of the ministries for education and research, economy and energy, and labour and social issues. Accordingly, the strategy primarily focuses on research, the economy, and society.
The document views AI primarily through an economic lens. It concentrates on preserving the strength of German industry – particularly small to medium-sized companies, the famous Mittelstand – by ensuring that AI will not allow other countries to overtake Germany economically. The government’s hope is that AI will help the Mittelstand continue to manufacture world-leading products. To achieve this, the strategy lays out three priorities:
- Strengthen AI research in Germany by, inter alia, creating at least 100 additional professorial posts.
- Advance the use of AI in industry, particularly the Mittelstand.
- Promote the responsible development and use of AI, guaranteeing social and individual rights.
The strategy also mentions AI’s impact on the climate, public administration, and the health sector. The German government sees “ethical and legal requirements” as an integral part, and the future “trademark”, of AI made in Germany. In this regard, Berlin follows the European Commission’s declared aim for Europe to become a leader in ethical AI.
The general tone of the strategy is intriguing in that it fluctuates between grand ambitions and defensiveness. Its proclaimed aim is for Germany to become “a world-leading place for AI” and for the label “AI made in Germany” to become a “globally recognised seal of quality”. However, the strategy’s defensive tone comes through in references to the need to safeguard future competitiveness. Overall, whereas the French strategy emphasises opportunities, the German one focuses on maintaining the status quo.
Although it has not yet published a national AI strategy, Italy made its first attempts to think about new technological challenges more broadly in its digital agenda for 2008, the “Digital Growth Strategy 2014-2020”, and its plan for implementing high-speed broadband. Although they do not concentrate on AI, these documents provide useful insights into the country’s attitude towards technological disruption. In particular, they acknowledge that there has been a substantial delay in Italy’s adjustment to technological development in Europe, pointing to the need for special support for the digital transformation of the country. The “Digital Growth Strategy 2014-2020” notes that cultural reticence, the fragmentation of initiatives across regions, and the mismanagement of resources are the main causes of the delay.
Judging by these documents, the Italian approach to AI is likely to focus on education and training, public services, and the economy. The government has already paid special attention to AI in public services. A white paper entitled “Artificial Intelligence: At the Service of Citizens” – published in March 2018, and scheduled to be updated every three years – is the most significant AI-related document the Italian government has produced thus far. Written by an AI task force within the Agency for Digital Italy (a technical body within the Presidency of the Council of Ministers tasked with ensuring the implementation of the objectives laid out in the EU’s “Digital Agenda for Europe”), the paper lays out the challenges of integrating AI into government services. These include concerns over data ethics, the availability of skilled employees, and legal considerations. The paper recommends ten measures, including a national platform for developing AI, an effort to facilitate the dissemination of AI-related skills, and the evaluation and verification of AI algorithms. It calls for public investment in AI and collaboration between research institutes, business accelerators, and innovation hubs at the national and European levels to promote the application of AI in the public sector.
The European Commission’s April 2018 call for the development of national approaches to AI prompted the Italian government to step up work on its national strategy, which is due to be published later this year. To this end, the Ministry of Economic Development has created a task force that brings together experts from the private sector, research institutions, and civil society. The ministry has formulated several preliminary aims for the strategy:
- Improve, coordinate, and strengthen AI research.
- Promote public and private investment in AI, including that from EU funds.
- Attract talent and develop businesses in AI.
- Encourage the development of the data economy, paying particular attention to the dissemination and correct use of non-personal data, and the improvement of interoperability and cyber security standards.
- Comprehensively review the legal framework around AI, especially in relation to the safety and responsibility issues raised by AI-based products and services.
- Analyse and evaluate the socio-economic impact of AI-based systems.
In May 2019, the group of experts completed a draft of “Proposals for an Italian AI strategy”, focusing on five main work strands in line with the European Commission’s guidance: research, technology transfer, and industry; education, training, and skills; data valorisation; ethics and regulation; and the enhancement of public administration. Specifically, the task force proposes the vertical application of AI by identifying five areas on which Italy should focus its investments to capitalise on national expertise: the internet of things, manufacturing, and robotics; services such as finance, healthcare, and creativity; transportation, agrifood, and energy; public administration; and culture and digital humanities. The document aligns with the approach to AI and technology expressed in the “Digital Growth Strategy 2014-2020” and its plan for implementing high-speed broadband. The draft depicts AI as an opportunity for a national renaissance in the economic, social, and environmental domains – with the ultimate objective of fostering sustainable development and serving the common good. The document, which makes 79 policy recommendations, is now being reviewed by experts at the Ministry of Economic Development.
Partners and competitors: European cooperation on AI
The EU hopes that its coordination of national AI initiatives will create a coherent framework for investment and legislation among its member states. As discussed above, the union has prioritised the development of ethical AI.
In April 2018, EU member states signed the “Declaration of Cooperation on Artificial Intelligence”, pledging to collaborate with one another in addressing social, economic, legal, and ethical questions related to AI, as well as to ensure that the EU becomes competitive in the area. The European Commission issued its “Communication on Artificial Intelligence for Europe” the same month. This document outlines three pillars of work: promoting economic and industrial capabilities and research; addressing the socio-economic changes brought about by AI, particularly in the areas of education and training, as well as labour market reform; and creating an ethical and legal framework for AI, with a view to the development of ethics guidelines.
Following its publication in December 2018 of the “Coordinated Plan on Artificial Intelligence”, the European Commission appointed 52 experts to the High-Level Expert Group on Artificial Intelligence. The group published a document entitled “Ethics Guidelines for Trustworthy AI” in April 2019. The fact that several EU member states had already published their national AI strategies by the time the EU’s “Coordinated Plan” became public may have undermined its impact somewhat. However, given the pace of technological development, there will be a continuous policy process in which member states further develop and update their strategies, and gradually integrate EU recommendations. But for the EU to play this coordination role, member states will need to accept its value in this – and to view working with European partners as beneficial to their AI efforts.
The EU’s “Coordinated Plan on Artificial Intelligence”
The main aims of the EU’s “Coordinated Plan on Artificial Intelligence” are to:
- Boost investment and promote excellence in AI applications that are trustworthy and “ethical and secure by design”.
- Develop and implement agendas for collaboration on research, development, and innovation shared between industry and academia.
- Adopt learning and training programmes and systems.
- Ensure that public administrations in Europe are global leaders in the use of AI.
- Implement, with the guidance of experts, clear ethical guidelines on the development and use of the technology, to set global ethical standards and position Europe as a global leader in trusted “ethical AI”.
France, Germany, and Italy all support bilateral or multilateral cooperation and the simultaneous adoption of national and European goals. Yet their stated motivations for doing so differ. In the German case, the focus on European – specifically, Franco-German – cooperation sometimes appears to be a goal in itself. The French strategy, in contrast, adopts a more pragmatic approach – supporting European cooperation only in areas in which the strategy’s authors consider it to be useful. (It is unclear whether Italy’s national AI strategy will name specific partners.)
Although Villani notes in his foreword that “we cannot conceive AI in a purely national framework”, the French strategy conceives of European cooperation on AI as useful, but not at all costs – and of Franco-German collaboration as the starting point of the process. The document’s foundation in geopolitical concerns can be seen in its argument that “France and Europe need to ensure that their voices are heard and must do their utmost to remain independent. But there is a lot of competition: the United States and China are at the forefront of this technology and their investments far exceed those made in Europe.”
It is this geopolitical awareness that makes France push for a European rather than exclusively national approach: “it is essential that [France and Europe] resist all forms of determinism by proposing a coordinated response at [the] European level”. The French strategy approaches European cooperation in an equally practical way. It notes, for example, that the transport and mobility sector “is particularly well suited to integration into a European scheme” as “one of Europe’s longstanding strengths”. The strategy adds that “the other priority sectors (health, defense and environment) do not lend themselves so easily to direct treatment at European level, although it would be useful to get Germany involved.”
Although the strategy includes a chapter on the ethics of AI, European cooperation does not appear to be a primary concern in this. Nonetheless, the authors note that, “at the European level, certain [sic] already imagine a network of national ethical committees, modelled on the ‘G29 network’ (network of Data Protection Authorities) … All these efforts must be encouraged.”
The strategy mentions Germany, as France’s obvious main partner, multiple times. In one instance, it recommends drawing inspiration from Germany’s establishment of an ethics commission on driverless cars. The strategy also states that, “to start on [the] development of a European industrial policy on AI, our mission recommends that, initially, work should be carried out within a Franco-German axis.” It continues: “Italy (the north in particular) should also be seen as a possible serious partner, all the more so because of its advances in the field of robotics” – an area in which it goes so far as to speak of a “Franco-German-Italian triptych”. Yet, though it mentions Italy several times, the strategy usually does so in relation to Germany, referring to initiatives such as “a Franco-German axis complemented by a partnership with Italy”.
While the German strategy features the subheading “AI made in Germany”, it does not adopt a nationalist approach. It emphasises the German government’s aim to work with groups such as the G7 and the G20, and states such as the US, Canada, Israel, and “certain Asian countries”. Most importantly, the strategy has a European, particularly Franco-German, focus. It uses the terms “European”, “EU”, and “Europe” around 90 times, stating that its main aim is to make Germany and Europe world leaders in AI. Specifically, the strategy calls for European research cooperation with a five-year goal of establishing a “European AI innovation cluster”. The German strategy also supports European efforts to develop ethical AI, identifying these as important in several instances.
The strategy mentions no other country as often as France, with which Germany plans to build a “virtual centre” of research and innovation institutes. Germany also wants to work on AI with the French Council on Innovation.
Although it names several partners and is generally enthusiastic about European cooperation, the German strategy does not appear to be motivated by geopolitical considerations. It does not seem to see the technology as an element of geopolitical power and important to German influence in the world.
For Italy, the EU is also likely to be an important reference point for its national AI strategy, especially in relation to finance and regulation. So are strong links to both France and Germany, particularly in the digitisation of their manufacturing sectors. Indeed, at the G7 Multistakeholder Conference on Artificial Intelligence held in Montreal in December 2018, Italy and France presented a joint paper entitled “The future of work – skills for the modern economy”. In this regard, it is interesting that “Proposals for an Italian AI Strategy” recommends that Italy privilege EU initiatives over Franco-German ones such as the Joint European Disruptive Initiative. More generally, the document recommends that the country play a central role at the EU level and avoid accelerating the international dialogue on the topic before having agreed on a common position at this level.
Nonetheless, some of Italy’s initiatives suggest that it is also enhancing its relationships with actors outside Europe. For example, within the framework of a bilateral agreement on industrial, scientific, and technological cooperation, Italy and Israel aim to create a joint lab on research and technological innovation that will cover AI. Moreover, in March 2019, Rome signed three memoranda of understanding with Beijing that focused on the Belt and Road Initiative, e-commerce, and start-ups respectively.
Don’t mention the war: The role of the military in national AI strategies
The impact of AI on warfare has become a hotly debated and closely studied topic among military professionals and strategists. As is the case in the civilian sphere, AI can have a variety of applications in military affairs, including in intelligence analysis, logistics, training, strategic foresight, and weapons systems. Military experts are particularly interested in AI-enabled autonomy.
Military AI and autonomy
Artificial intelligence (AI) and autonomy are not the same, but they are closely related and often mentioned together when discussing AI in a military context. In this context, “AI” denotes a system’s ability to determine the best course of action to achieve its goals, and “autonomy” the freedom a system has to accomplish its goals. According to the United Nations Institute for Disarmament Research, “the rapidly advancing field of AI and machine learning has significant implications for the role of autonomy in weapon systems. More intelligent machines are capable of taking on more challenging tasks in more complex environments.”
Thus, AI can enable autonomy because more intelligent systems can be given more freedom to act. From a military perspective, there are five potential advantages to AI-enabled autonomous weapons:
- Speed: Machines can process large amounts of data, and take decisions based on it, faster than humans can. Warfare is speeding up so much that it may become too fast-paced for humans.
- Stealth: AI-enabled autonomy can help make military systems harder for enemy defences to detect. Today, drones and other unmanned systems’ reliance on communication links for guidance can reveal their location to adversaries. Systems that have the autonomy to operate without these links could be far harder to detect.
- Precision, efficiency, and effectiveness: AI is unhindered by human emotions in the heat of battle. Proponents of AI hope that this could help to, for instance, minimise civilian casualties.
- New capabilities: AI-enabled autonomy is crucial to the development of capabilities that could produce military breakthroughs. For instance, AI would be required to coordinate the actions of all units in a drone swarm.
- Reduced reliance on humans: Autonomous systems promise to reduce the need for human soldiers, thereby keeping them out of harm’s way. Similarly, AI-enabled systems could reduce the physical or cognitive strain on soldiers.
In the US and China, the debate on how AI will change – and perhaps even revolutionise – warfare and military organisation is at an all-time high. However, there has been considerably less research on European militaries’ efforts in AI. Although there are several European projects to develop AI-enabled technology (see box), Europe’s political and strategic debate on AI-enabled military technology is underdeveloped. Arms control is the only area in which European governments seem willing to publicly engage with AI-enabled autonomy.
It is, of course, a good idea to discuss how to regulate new military capabilities. AI-enabled weapons systems pose many challenges. On the technical side, AI systems are “black boxes”: their behaviour is learnt rather than explicitly programmed. This means that it is impossible for a human to fully comprehend the computer’s reasoning – which is particularly concerning in systems that could take life-and-death decisions.
Thus, it is considerably harder for humans to anticipate or detect mistakes made by AI systems – including ones caused by adversarial action such as sabotage – than those made by traditional computers. Societies that depend on AI will need to understand this. Such mistakes can result from biases that AI systems develop if they have been trained using biased data. For example, facial recognition systems used by police forces have identified a disproportionately high number of innocent people with relatively dark skin as criminals. Similarly, algorithms developed to help human resources departments with hiring have severely discriminated against women. These biases are likely to have been unintentional, caused by human biases expressed in the data. Nonetheless, hostile actors can exploit these flaws in a way that is difficult to detect. Hence, AI-enabled weapons have vulnerabilities of a type that is unfamiliar to military planners and commanders.
Political leaders understandably fear that the development of AI-enabled lethal autonomous weapons could spark an arms race. Due to the speed with which autonomous weapons may be able to act, the first state to use them would gain a significant military advantage. Thus, there is a real danger that such systems could lead to unchecked escalation: once one country starts to use them, others might feel they have to follow, leading to an armaments spiral. There is also a danger that, following unexpected events, AI-enabled autonomy could spark “flash wars” that mirror “flash crashes” on stock markets – in which hundreds of billions of dollars have been wiped off share prices faster than humans can react.
Non-governmental organisations that aim to ban lethal autonomous weapons – such as the Campaign to Stop Killer Robots, which was instrumental in starting discussions on these systems within the framework of the Convention on Certain Conventional Weapons (CCW) at the United Nations – tend to focus on the ethical challenges of AI-enabled weapons systems. They are particularly concerned about the unique problems presented by systems that allow humans to transfer responsibility for life-and-death decisions to machines. AI could allow for the creation of weapons that completely remove humans from the decision to kill. Many ethicists argue that, as machines are unable to appreciate the value of a human life and the significance of its loss, allowing such systems to kill would violate the principles of humanity. As political scientist Frank Sauer notes, “a society that no longer puts the strain of killing in war on its collective human conscience risks nothing less than the abandonment of one of the most fundamental civilisational values and humanitarian principles”.
European and other decision-makers should take these concerns seriously. Yet it is surprising that several European governments have exclusively focused on arms control. This is particularly astounding given that, as discussed above, several European projects currently in development include AI-enabled technology. Therefore, many European states seem to be engaged in two parallel discussions on AI. While the public debate on AI in Europe (particularly Germany) focuses on arms control, European defence companies are debating how to integrate AI into their systems. Defence experts Wendy Anderson and Jim Townsend argue: “there seems to be a two-speed discussion going on. European defense industry officials we talked to had no qualms about harnessing AI to reduce manufacturing costs and improve customer satisfaction. But governments and institutions like NATO and the EU were having a harder time. Will AI’s impact on society — say, in data privacy — be feared and, hence, regulated? Can it be ‘purchased’ for national defense or domestic use, and how much would this cost a tight-fisted government? Could it, perhaps, simply be ignored?” The national AI strategies of France, Germany, and Italy approach this topic in very different ways.
Selected AI-enabled European military systems in development
BAE Taranis (UK): The BAE Taranis is an armed drone system currently in the demonstrator and testing phase. Judging by publicly available information, it appears to be a highly autonomous system that is able to take off, land, and identify targets without human intervention. With its small radar profile, Taranis has conducted automated searches in trials, locating and identifying targets according to its assignments. However, after a series of successful tests and trials between 2013 and 2015, the development of the system appears to have stalled.
Dassault nEUROn (France, with Greece, Italy, Spain, Sweden, and Switzerland): nEUROn is an unmanned combat air vehicle similar to Taranis. It has performed naval tests with France’s aircraft carrier. Recently, its demonstrator carried out a series of low observability tests. The Global Security Initiative’s autonomy database ranks nEUROn as the most autonomous system among the 283 it has analysed.
Airbus and Dassault Future Combat Air System, or FCAS (France, Germany, and Spain): The Future Combat Air System will involve teaming between a manned fighter and swarms of autonomous drones. Other elements of the system will include hypersonic weapons and augmented reality. It is in an early stage of development.
BAE Tempest (UK, with Italy): Tempest is a sixth-generation aircraft in an early stage of development. Planned to enter service in 2035, Tempest is designed to incorporate many new technologies.
As discussed above, France views defence and security as one of the four AI priority sectors for industrial policy. (One of the authors of the strategy is an engineer who works for the French defence procurement agency.) The strategy notes that, in the future, “law enforcement agencies could … rely on algorithms to manage the deployment of their patrol units and armies could use LAWS (Lethal Autonomous Weapons Systems) in operational theatres abroad”. It takes on the issue of lethal autonomous weapons in the section on ethics, calling LAWS “one of the greatest concerns regarding developments in AI”.
The strategy emphasises that France is engaged in discussions on LAWS at the UN. However, the country has been unwilling to support calls for a ban on these systems. The paragraph on LAWS in the French AI strategy shows the tightrope that France is trying to walk: “Research into performance is, however, a necessity when we are confronted by increasingly capable competitors and increasingly complex tactical situations involving increasingly sophisticated systems. From a French point of view it is, however, possible to be a driving force behind proposed regulations or the development of good practices without having to forego [sic] advanced capabilities ex ante or fall behind other States in this important strategic domain.” In line with this, the French Ministry of Defence announced in early 2018 that it plans to invest €100 million per year in AI research.
The foreign policy and defence elements of AI are markedly absent from the German AI strategy – despite the fact that the document is otherwise very comprehensive, listing almost all other areas that the technology is likely to affect. The lack of focus on foreign policy and defence was also noted by members of the committee of inquiry. Most notably, the strategy includes only one sentence that refers to security and defence, which shifts all responsibility for this area to the Ministry of Defence: “with regard to new threat scenarios for internal and external security, in addition to research on civil security, the Federal Government will also promote research to detect manipulated or automatically generated content in the context of cyber security. The research on AI applications, in particular for the protection of external security and for military purposes, will be carried out within the scope of the departmental responsibilities.”
The absence of defence elements from the strategy is likely due to the fact that, while the Ministry of Defence was consulted in the process (as was the Foreign Office), it did not have a leading role in formulating the strategy. Although the aforementioned quote could be interpreted as giving the Ministry of Defence a mandate to develop its own strategy on the military applications of AI, it is unlikely that the ministry will do so publicly.
Meanwhile, although Italy’s AI efforts seem to focus on socio-economic considerations for now, the Italian Ministry of Defence has published documents that cover the security and defence aspects of the technology. For instance, Defence Minister Elisabetta Trenta mentions AI in her assessments of Italy’s geopolitical environment and underlines the need for the military to adapt to a wide range of changes, particularly technological developments. When the Italian government brings together the various dimensions of AI within a coherent strategic framework, it may do so by replicating the interdisciplinary approach it has taken to the cyber domain. In discussions on LAWS at the UN, Italy has stressed the importance of retaining human control over weapon systems and expressed an interest in adopting the Franco-German compromise position of issuing a political declaration (rather than supporting an official ban on autonomous weapons).
A strategic cacophony on AI?
Differing organisational approaches
The differing ways in which France, Germany, and Italy have drafted their AI strategies have had a direct impact on the documents’ tone and focus. Germany’s allocation of responsibility for the strategy to three ministries (in consultation with all ministries) has led to a cautious, somewhat defensive approach that focuses on research, the economy, and society. France’s reliance on experts has resulted in a more focused and detailed strategy that has a relatively upbeat tone. Italy appears to aim to follow a middle way that involves experts under ministerial guidance – a potentially promising, level-headed approach.
Risk or opportunity?
The German – and, to the extent that one can infer, Italian – approach to AI is markedly driven by fear of losing economic opportunities. Germany appears to be particularly motivated by a desire to preserve its favourable economic position, causing it to adopt a defensive mindset. The French strategy, in contrast, appears to be driven by the conviction that, in addition to creating risks, AI can present opportunities for the economy, society, and France more broadly.
A geopolitical mindset?
It is striking that, of the three countries, only France appears to view AI as a factor in geopolitical competition. Parts of the French strategy focus on the military and security elements of the technology, which its German counterpart almost completely neglects. The EU appears to be largely uninterested in AI’s geopolitical importance – although the European External Action Service (EEAS) has set up the Global Tech Panel, a group of experts who work on the relationship between technology and the Common Foreign and Security Policy. However, as the panel meets in private, there is little publicly available information on its work.
European or national?
All three countries plan to adopt national and European goals on AI, albeit for different reasons. Whereas the German focus on European and Franco-German cooperation sometimes appears to be a goal in itself, the French strategy is more pragmatic, advocating European cooperation only in areas in which France considers it to be useful. France sees Germany as its most important partner in AI, although Italy comes a close second. It is less clear who Italy will identify as its main partners, although the country is likely to maintain a European focus in this regard.
If the EU seriously intends to coordinate AI efforts in Europe, it should do more to establish itself in this role. Most importantly, the EU will need to argue more forcefully that cooperation on the European level will benefit its member states. The most obvious argument in favour of cooperation is geopolitical: only together are European states big and strong enough to influence the global development of AI. The EU could also argue that, by cooperating with one another, European states could help counter a rise in AI nationalism. Here, the EEAS and ministries of foreign affairs can play an important role, ideally in cooperation with ministries of defence and the private sector. Communication between European private firms (such as arms manufacturers) and state institutions needs improvement.
The EU’s focus on ethical AI, and its effort to turn this into a European brand, is valuable. And the High-Level Expert Group on Artificial Intelligence’s recent publication of the “Ethics Guidelines for Trustworthy AI” is a step towards this goal. But the EU will only lead the world in this area if it significantly strengthens its AI capabilities in general.
About the authors
Paola Sartori worked on this publication as a researcher at Istituto Affari Internazionali and a pan-European fellow at ECFR. Since 1 June 2019, she has been a policy officer for industry engagement at the European Defence Agency.
Ulrike Franke is a policy fellow at ECFR.
GS Research Unit, “Profiles in Innovation: Artificial Intelligence: AI, Machine Learning and Data Fuels the Future of Productivity”, 14 November 2016, p. 3.
The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.