Autonomous weapons, AI are future of defence but require ethical debate, says expert

An air-defence system of the 'Patriot' type. [EPA/CARSTEN REHDER]

Should a computer be allowed to take decisions over life and death? As artificial intelligence plays an increasing role in warfare, many issues still need to be discussed in parliament and by the general public, Ulrike Franke told EURACTIV in a conversation about the ongoing autonomous weapons debate and the future of warfare.

Ulrike Franke is a policy fellow in the London office of the European Council on Foreign Relations (ECFR) and part of ECFR’s New European Security Initiative. She works on German and European security and defence, the future of warfare, and the impact of new technologies.

Franke spoke to EURACTIV’s Alexandra Brzozowski.

From a security and defence perspective, what is the military strategy behind warfare with autonomous weapons?

From a military perspective, there are several reasons why there is an interest in autonomous weapons. Number one is speed: warfare is becoming faster-paced, and computers and machines are generally better at making fast decisions than humans, especially when you have a lot of information that needs to be thought through.

The second reason is stealth. At the moment, unmanned systems such as drones need communication and command links so that the pilot can steer them. The problem is that these links are relatively easy to detect – which means that the system itself is easy to detect. If you want to carry out a stealthy attack without being seen, autonomous weapons can do that in a way that conventional weapons cannot.

The third reason why companies and armed forces are interested in autonomy and artificial intelligence is that they allow for new military capabilities, most importantly swarms. There is no way a human could control every single drone of such a swarm. The drones need to be able to communicate with each other and react to each other. So they need sensors and they need decision-making algorithms – and again you are at the AI and autonomy stage.

How does this change the way wars could be fought in the future?

I very much expect we will see more AI and potentially more autonomous weapons systems used in warfare. That, however, does not necessarily mean a “killer robot” scenario, though I am concerned that we will get into an escalatory logic that could lead towards an arms race. When one party starts getting those systems, others may feel they need to do the same. And it can lead to mistakes and misunderstandings. There could be dangerous chain reactions with computers reacting to each other and potentially causing wars.

When it comes to the lethal autonomous weapon systems (LAWS) debate, are autonomy and intelligence the same thing, and how do you distinguish between different types of autonomy?

This is indeed very important because autonomy and AI are always mentioned together. They are linked but are not the same. Intelligence is a system’s ability to determine the best course of action to achieve its goals. Autonomy is the freedom a system has to accomplish its goals – so for instance, less supervision can mean more autonomy. So intelligence allows a system to take decisions in complex environments. Basically, it allows a system to achieve its goals. But autonomy just means that it has the freedom to do so.

And of course, if one wants to have an autonomous system, one wants it to be as intelligent as possible. Because no one wants a dumb system running around with a lot of freedom. But, and this is important, it is absolutely conceivable to have a highly intelligent system which is restrained, that doesn’t have a lot of autonomy.

That is the link that is sometimes a bit blurred. But usually, the more autonomous systems are, the more they rely on artificial intelligence.

We hear a lot about ‘killer robots’ and we speak about possible mistakes. In the end, it is humans that feed these machines with data. What other problems do you see?

Hacking is certainly one of the issues, but I am particularly concerned about biases. The problem is: an artificially intelligent system is only as good as the data it is being fed. And data is almost always man-made in one way or another, because it records human behaviour or is collected, identified and coded by humans. And that means there are almost always biases – often unconscious biases – in data.

When you look at artificial intelligence in the civilian realm, you have, for example, American police departments using facial recognition. The American Civil Liberties Union (ACLU) tested one of these facial recognition systems by matching members of Congress against a database of mugshots. The system was convinced that 28 of them were criminals, and it disproportionately identified people of colour as criminals.

The problem I see is that we as a society have become so used to the computer always being right: your calculator is always right, whether you ask it to multiply 2 and 2 or to calculate the square root of 5756. But AI systems are different; they are not rational and always right in the same way that a programmed computer is. We therefore need to change our thinking: we need to treat AI systems rather like experts whom we generally trust and assume know more than we do, but whose answers we still question. I think there needs to be a change in our approach.

A European Parliament resolution in July called for “an international ban on weapon systems that lack human control over the use of force”. Last week in Geneva, UN representatives met and discussed the issue of a LAWS ban. There are also calls for an international treaty controlling autonomous weapons. Do you think this is likely to happen?

I have a lot of sympathy for the people calling for a ban, but personally, I am sceptical. I think it is going to be incredibly difficult, if not impossible, to have a ban that includes all types of autonomous weapons.

Firstly, because some autonomous systems are already in use. Secondly, as I mentioned, there is a military logic behind wanting such systems. And thirdly, I am not even sure whether we necessarily want to ban everything that has some level of autonomy and artificial intelligence, and drawing a line is extremely difficult.

I am also worried that if we ever get to a ban, it will be of something very specific. And we might become complacent afterwards: you celebrate the ban, turn your head – and do not realise that there are many loopholes. This is why I am not convinced of the ban approach. I am thinking more in terms of norms and rules when it comes to the use of these technologies.

Can you give an example of what is already in use?

Germany, for instance, uses the Patriot and MANTIS air-defence systems. They are automated to the degree that they shoot down incoming missiles on their own. There is not one person sitting there and pressing the button for each incoming missile. Why? Because it would not be fast enough. This is exactly the speed issue I mentioned earlier.

Are there fears that such a ban could hamper innovation?

That is a very important question. Artificial intelligence and autonomy are “dual-use” technologies, and an important part of the research is already done in the civilian rather than the military realm. I think it is conceivable to have a ban on military applications and still have research going on in other areas, but there has been concern about that.

But of course, the problem is that one can take research and use it for military ends, and it is very difficult to stop anyone from doing so.

A ban would also not include non-state actors or terrorism.

That is true. Then again, a ban could mean that bigger, very sophisticated autonomous platforms won't be developed and thus could not fall into the hands of non-state actors. But it is already possible to find the relevant code online and build your own autonomous systems, and non-state actors can use this to their advantage. Usually, if something is available one way or another, someone is going to find a nefarious purpose for it.

Why did this debate come up exactly now and how is it likely to develop?

The debate began in 2013, when activist groups, in particular the Campaign to Stop Killer Robots, began campaigning for a ban. Because artificial intelligence has emerged as a hot topic today, LAWS have also attracted more interest. This is good, though I am worried that we are not talking enough about the use of AI in surveillance. I am more scared of its use in surveillance technologies than I am immediately scared of LAWS, simply because the former is already happening, is already having an impact and is making mistakes on a daily basis.

My main message in all of this is that the public needs to be better informed. The most important questions around LAWS are ethical and moral ones. Should a computer be allowed to take decisions over life and death? This is something we cannot decide only in parliament; it needs to come from the public as well.
