Laws on LAWS? Germany’s place in the ‘killer robot’ debate

Germany – and all Europeans – needs to get on with agreeing not only on the technical rules around lethal autonomous weapons, but also on whether we really want to delegate decisions over life and death to machines

Last week, representatives of over 70 states, experts, and activists gathered at the United Nations in Geneva to discuss ‘killer robots’, or ‘lethal autonomous weapons’ (LAWS) as they are more conventionally known. LAWS, though notoriously difficult to define, are weapons that can identify and destroy targets without human intervention. These systems can be airborne unmanned drones, underwater robots, missile-defence systems, or, arguably, cyber weapons. They often employ Artificial Intelligence (AI) for decision-making. Intelligent, fully autonomous, lethal Terminator-type systems do not exist yet, but there are hundreds of research programmes around the world aimed at developing at least partly autonomous weapons. Military robotics systems that automate parts of their decision cycle are already in use.

For years, activist groups, from the International Committee for Robot Arms Control (ICRAC) to the Campaign to Stop Killer Robots, have been calling on states to ban LAWS before their development and use can no longer be stopped. After holding three informal meetings of experts, states party to the Convention on Certain Conventional Weapons (CCW) began formal talks on the topic in 2017. Last week’s meeting, the sixth, ended without the breakthrough that activists had hoped for, namely a move to formal negotiations on a ban. The document issued at the end of the meeting recommended only that non-binding talks continue.

Germany was one of the countries present in Geneva, but it has struggled to find its position in this debate. LAWS are a problematic topic for Germany, as the country has several, partly contradictory, interests at stake. For one, many German decision-makers continue to see the country as a ‘civilian power’ – a country that largely rejects anything military and solves international conflicts with diplomatic rather than military means. The German public is generally uncomfortable with military and defence topics, despite some recent changes. Accordingly, in the coalition agreement concluded earlier this year, the two governing parties stated that they “reject autonomous weapon systems devoid of human control” and called for them to be “proscribed around the world.”

At the same time, however, there is a growing realisation within Germany that AI will play an enormous part in all areas of the economy in the future. In late August, a new “digital council” was established, led by the McKinsey-consultant-turned-political-silver-bullet Katrin Suder, to advise Angela Merkel on this digital transformation. A month earlier, the cabinet had published a list of key points to guide the upcoming AI strategy, expected in December. Similar moves can be observed across Europe, where governments are hastening to get to grips with rapid technological change and what it means for their countries.

But AI is a so-called ‘dual use’ technology, meaning that it can be used for both civilian and military purposes. Some of the countries that are reluctant to support a ban on LAWS worry that such a move could impinge on their ability to research and develop other types of AI, as well as on some military AI uses they may be interested in further down the line. The European Union, also present at the talks, reiterated this point of view in its statements in Geneva in 2017 and earlier this year, underlining last week that “our work should not hamper progress in civilian research and development, or innovation in high-technology industries like robotics.” As Germany is still in the process of developing a coherent AI strategy, with the goal of bringing the country to the forefront of AI research, there may be a worry about limiting its options too early.

Last, Germany, as usual – and even more so since the election of Donald Trump as US president – aims to be a good partner to France and to present a joint Franco-German position. But France has so far been unwilling to support a full ban. In March, the Campaign to Stop Killer Robots criticised Germany for falling in behind French reluctance to back binding rules for lethal autonomous weapons.

Trying to square this circle of diverging interests, all while still working out its own policies on many AI-related questions, Germany adopted a position in Geneva aimed at pleasing everyone: it teamed up with France to push, as a first step, for a political declaration rather than a full ban. Frank Sauer, a LAWS expert who attended the talks with ICRAC, told me: “The problem is that even though the proposal for a political declaration is in fact a good first step, there is a risk that some countries will join this proposal with the aim of making it the last step of the talks. The meetings could thus end with nothing more than a political declaration – falling short of a ban, which the German government had declared its support for in its coalition agreement.”

Forging a common international agreement will prove difficult. In Geneva, states simply pledged to continue exploring “options for future work”, falling short of recommending the start of formal negotiations on a ban. After a dynamic start, the CCW talks have stalled, lending weight to an early fear that states that want to develop LAWS could use the CCW process to play for time and smother the anti-LAWS campaign.

Germany should aim to develop a clearer position – ideally in a European context. Most official German publications on AI leave out the military aspects. The public debate remains underdeveloped: the German public, the media, and politicians all struggle to discuss defence and security topics rationally, as the earlier German drone debate, in which emotions trumped rational arguments, demonstrated. But more than almost any other military topic, the use of AI in warfare and the issue of lethal autonomous weapons deserve a well-informed public. Once any technical problems are sorted out, the final question when it comes to LAWS is an ethical one: are we as a society, as humankind, willing to delegate decisions over life and death to machines? It is the people who need to answer this question, and to do so they need to be fully engaged. Discussions such as those in Geneva can help to move the debate forward and inform the public, but they are unlikely to be the final step in the matter.

The European Council on Foreign Relations does not take collective positions. ECFR publications only represent the views of their individual authors.
