I was recently asked to write a piece for “The Conversation”, the newly-launched (in the UK) internet journalism project featuring content written by academics. The peg was the recently published report by the UN Special Rapporteur on Extrajudicial, Summary, or Arbitrary Executions, Christof Heyns, on “lethal autonomous robotics and the protection of life”. My piece duly appeared under the title “Robots don’t kill people, it’s the humans we should worry about”, a nice title chosen by the sub-editor at “The Conversation”, and one which reflected well the main point I was trying to make, namely that machines will only do what humans programme them to do. Naturally, this is only part of the story.
18 months ago we started a project at cii at the University of Surrey examining how new capabilities are reshaping international intervention. We called it “Hitting the Target?” Following an initial workshop at the University of Surrey in July 2012 (see previous blog post in this series), in March of this year we published a report with the same title jointly with RUSI, the Royal United Services Institute. Although we initially conceived the subject in fairly broad terms, perhaps inevitably we ended up focusing on the use of ‘drones’ in targeted killings (NB this was therefore about remote, rather than autonomous, weapons).
Throughout this project I have been struck by two things. First, the public and media obsession with the technology and what it can do as opposed to the wider foreign policy context within which it is used – a point Adrian Johnson and I tried to highlight in the “Hitting the Target?” report. Second, the danger – perhaps particularly for us academics – of discussing these matters in such a dry, technical way that we lose sight of the wider moral and political issues involved. My article in “The Conversation” might be accused of falling into that trap, so in this post I want to acknowledge that there may well be reasons for imposing limits on the production and use of “lethal autonomous robotics” that flow from wider ethical and political concerns rather than from a more limited analysis of what international law allows.
Opponents of ‘drones’ advance a number of arguments against their use: for example, that overriding technological superiority lowers the threshold for the powerful to wage war on those unable to defend themselves; that the remoteness of this kind of warfare somehow dehumanises it and makes it less likely that international humanitarian law will be maintained; and that a policy of targeted killings may produce such negative reactions among populations ‘on the receiving end’ that it becomes counterproductive and reduces, rather than enhances, security. (See, for example, the 2012 Stanford/NYU Living Under Drones report.) I want here to highlight two particular arguments that apply as much to autonomous as to remote weapons systems: one based on the concept of non-proliferation and the other on the inherent weakness of the kind of targeting methods employed in so-called ‘precision strikes’.
Although I was happy with the title given to my article in “The Conversation”, I was also aware that the argument that it is people – rather than machines – who kill other people is uncomfortably close to the arguments used by those in the US and elsewhere who argue against tighter gun controls. What these arguments manifestly fail to acknowledge is the danger of weapons falling into the wrong hands and being used in an unacceptable way. Given this danger, and the difficulty of discerning in advance who may be an unsafe person to possess a weapon, other societies such as the UK have decided that the ownership of guns must be much more tightly controlled. This highlights another dimension of much of the debate about ‘drones’, which is the assumption that only ‘we’ (i.e. the powerful states of the West) possess them. The discourse is therefore framed in terms of the ethics and legality of what ‘we’ do. In a scenario where such capabilities were in the hands of ‘our’ enemies – and in particular of certain non-state actors such as Al-Qaeda – the terms of the debate would change dramatically. ‘We’ are in favour of proliferation only so long as the capability stays within ‘our’ family.
The second argument that I want to outline takes as its starting point that even in ‘our’ hands the use of such weapons is a lot less safe than we claim. This is because our understanding of the world in which we operate is much less sophisticated than we imagine it to be. Instead of putting a premium on detailed contextual analysis of different societies and how they function, we rely increasingly on the power of computers and the so-called ‘big data’ approach to tell us where threats are to be found. The concept of “signature strikes”, based on a “pattern of life analysis” and targeting “groups of men who bear certain signatures, or defining characteristics associated with terrorist activity, but whose identities aren’t known” (Living Under Drones, p. 12), is the most egregious example of this. As the author of “Bad Science” in The Guardian, Ben Goldacre, has argued, the weakness of this approach is that “your risk of false positives increases to unworkably high levels as the outcome you are trying to predict becomes rarer in the population you are examining”. In other words, using patterns – of behaviour, socio/ethnic/religious profile, or whatever – to predict who poses a threat and who is therefore a ‘legitimate’ target is an inherently unsound method. ‘Signature strikes’ could only conceivably be justified on the basis that a very high proportion of a population was made up of terrorists, something that most people would find very implausible indeed.
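To see why, it helps to run the arithmetic behind Goldacre’s point. The short sketch below is purely illustrative: the hit rate, false-alarm rate and base rates are my own hypothetical assumptions, not figures drawn from the report or from Goldacre, but they show how a profiling method that sounds impressively accurate still flags mostly the wrong people once the thing it is looking for is rare.

```python
# Illustrative sketch of the base-rate problem Goldacre describes.
# All numbers below are hypothetical assumptions chosen only to show
# how rare outcomes drive up the share of false positives.

def share_of_false_positives(base_rate, hit_rate, false_alarm_rate):
    """Return the fraction of flagged individuals who are NOT genuine threats."""
    true_positives = base_rate * hit_rate
    false_positives = (1 - base_rate) * false_alarm_rate
    return false_positives / (true_positives + false_positives)

# Suppose the profiling method is optimistically good: it flags 99% of
# genuine threats and wrongly flags only 1% of everyone else.
hit_rate = 0.99
false_alarm_rate = 0.01

# If 1 person in 10,000 in the surveilled population is a genuine threat...
print(share_of_false_positives(1 / 10_000, hit_rate, false_alarm_rate))
# ~0.99: roughly 99% of those flagged are false positives.

# If the threat is far less rare (1 in 100), the same method looks much better.
print(share_of_false_positives(1 / 100, hit_rate, false_alarm_rate))
# ~0.50: about half of those flagged are false positives.
```

The point is not the particular numbers but the shape of the result: the rarer the genuine threat within the population being examined, the more the flagged group is dominated by people who were never a threat at all.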
So, although I find it strange that we vest in a piece of machinery the moral blame that truly belongs to us humans, I also worry that our faith in technology and the power of numbers is leading us down a dangerous path. For that reason I believe Christof Heyns is right to argue that there should be a moratorium on the further development of “lethal autonomous robotics” until such time as an international framework governing their use can be agreed. In the end “Hitting the Target” may turn out to be rather less straightforward than some would like it to be.
—
Sir Michael Aaronson is a Professorial Research Fellow and Co-Director of cii – the Centre for International Intervention – at the University of Surrey in Guildford in the UK. Contact via: m.aaronson@surrey.ac.uk; @MikeAaronson; @cii_surrey