The Future of Killer Robots: Are We Really Losing Humanity?

Science-fiction writer Isaac Asimov famously formulated “The Three Laws of Robotics” to protect humans from their artificial creations. They are as follows: (1) “A robot may not injure a human being or, through inaction, allow a human being to come to harm”. (2) “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law”. (3) “A robot must protect its own existence as long as such protection does not conflict with the First or Second Law”. Decades after Asimov penned these rules, his fiction is fast becoming fact: only instead of adhering to the Three Laws, robots and autonomous weapons are being engineered to violate them.

Interest in drones has exploded. In newspapers, online, and in academic outlets, commentators are swarming around the drone with an alacrity that reveals the importance many now place on the unmanned technology. This interest is fuelled by the controversy drones generate. The Predator drone is near-synonymous with the U.S. program of targeted killings in Pakistan, Somalia, and Yemen. Questions of legality, ethics, and morality surround what is quickly becoming a permanent, borderless, and remotely piloted war. Welcome, it seems, to what Derek Gregory terms an “everywhere war”[i], a type of state violence defined by its indiscernible, porous battlespace.

Set against this new geography of assassination is the cultural climate within which drone warfare is nested. Science-fiction films like Terminator are translated into reference points for the future uses and abuses of a technology that is only in its infancy[ii]. Fears surround what drones might be capable of: What if they start to think for themselves? Can drones make ethical decisions? Such alarmism is not entirely misplaced, of course. The Predator and Reaper drones that circle the skies today are, in essence, quite primitive aircraft. They require a vast team of human operators and analysts to keep them in the air: some 168 people, for example, are needed to keep a Predator aloft for 24 hours. And, as it turns out, they crash quite frequently too. So if we are only at the beginning of what I have elsewhere called our brave new Droneworld, a moment reminiscent of the Wright Brothers’ first flight in 1903, then we are indeed faced with an armada of questions about its future.

Losing Humanity: The Case Against Killer Robots

This unknown future is brought to life in a sobering report released by Human Rights Watch and the Harvard Law School International Human Rights Clinic. Titled “Losing Humanity: The Case against Killer Robots”, the 50-page document marks something of a watershed: it is the first publication by a nongovernmental organization on the legality and ethics of “fully autonomous weapons”. While acknowledging that fully autonomous “killer robots” do not yet exist, the report calls on governments to pre-emptively ban them because of the danger they would pose to civilians in armed conflict, and to codify that ban in an international treaty preventing their development and procurement.

Central to the report is a series of assertions about the limits of autonomous robots. (1) Under the principles of international humanitarian law, they would be unable to discriminate between soldiers and civilians, and unable to assess the proportionality and necessity of a military attack. (2) Autonomous weapons would lower the threshold for going to war. (3) They would create an “accountability gap” by decentralizing the decision-making process. In short, the future of killer robots is grim, and the world must act now, before the proverbial “Skynet” does indeed become self-aware and expel humans from the “loop”. (A Cambridge University centre, the Centre for the Study of Existential Risk, has been set up to investigate precisely this kind of existential threat.)

Humans, of course, are no guarantee of moral decision-making. In fact, historically, we’ve done a pretty lousy job of reining in our collective violence. But the Human Rights Watch report insists that without homo sapiens to act as ethical guardians, “robots would not be restrained by human emotions and the capacity for compassion” (p.4) and “emotionless robots could, therefore, serve as tools of repressive dictators seeking to crack down on their own people” (p.4). While the report does recognize the limits of this argument, it stands its ground. On the opposite end of the spectrum, we find figures fantasizing about robots able to make rational ethical decisions without all that “messy fleshy” stuff getting in the way. This utopia is brilliantly satirized in John Butler’s “ethical governor” video, a reaction to Ronald Arkin’s creation of the same name. Arkin argues that “fully autonomous weapons would be able to comply with international humanitarian law better than humans… They would not be influenced by emotions such as anger or fear. They could also monitor the ethical behavior of their human counterparts” (p. 28).

But between robotic bad guys unleashing Hellfire missiles on anything that twitches, and robotic saviours with “ethical adaptors” and strong AI, what are we actually left with?

Blueprints for the Future

Certainly, Human Rights Watch hits the nail on the head when it comes to military aspiration. The U.S. Department of Defense is scrambling to put together a coherent set of blueprints for autonomous futures. These “roadmaps” are a mixture of bureaucratic insider jargon and genuinely eye-opening statements. The most recent, the Unmanned Systems Integrated Roadmap FY2011–2036, states that it “envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure” (quoted on p.7). In a previous paper co-authored with Majed Akhter[iii], we looked at the U.S. Army’s 2010 publication, Unmanned Aircraft Systems Roadmap 2010-2035. It too contains a raft of meditations on the rise of autonomous drones. One of the most striking was the “SWARMS ability” performed by micro-drones called “Nanos”. Rather than becoming bigger and badder, such drones take their inspiration from biology and the natural sciences, resembling innocent-looking insects that hover in the atmosphere and cooperate with one another. According to the U.S. Army’s long-term plan (p.65):

“By 2025, Nanos will collaborate with one another to create swarms of Nanos that can cover large outdoor and indoor areas. The swarms will have a level of autonomy and self-awareness that will allow them to shift formations in order to maximize coverage and cover down on dead spots. Nanos will possess the ability to fly, crawl, adjust their positions, and navigate increasingly confined spaces”.

The report adds immediately below,

“Technological advances in artificial intelligence will enable UAS to make and execute complex decisions required in this phase of autonomy, assuming legal and policy decisions authorize these advances”.

It is far from clear how close we are to realizing these military ambitions. We could be some way off, perhaps further than the 30-year timeframe within which the majority of these official reports work. There are glimpses of “paradigm shifting” technologies, such as the U.S. Navy’s X-47B or the U.K.’s “Taranis”, but many of these remain untested. Of course, not all autonomous weapons are the flying drones we typically imagine. The report discusses the Israel Defense Forces’ automated “sentries”, positioned along the 60km border with Gaza and able to detect movement within a kill zone of up to 1.5km (p. 15). Regardless of the timeframe (30, 50, or even 100 years), the report is right about one thing: there’s no turning back. As even the U.K.’s Ministry of Defence asked in its 2011 roadmap, “There is a danger that time is running out – is debate and development of policy even still possible, or is the technological genie already out of the ethical bottle, embarking us all on an incremental and involuntary journey towards a Terminator-like reality?” (Chapter 5, Section 21)

Rethinking Autonomy

I do think that the question concerning “autonomy” is the right one to be asking. But why limit ourselves to an overly technological definition? Does a robot really have to possess advanced AI and sensing capabilities to be autonomous (or to be an “existential threat”)? I imagine autonomy much more generally, as the capacity to influence other things in the world. Thought of this way, we begin to recognize the power inherent in all objects, human or nonhuman. Cars, mobile phones, insulin, volcanoes, mosquitoes, and satellites all impact upon the world in spectacular and banal ways, opening and closing windows of possibility. As humans we adapt to these objects, and our worlds change because of them. My contention is that military drones are already autonomous because they are shifting the very conditions of how people, places, and nations relate to one another. We are now in the age of the Predator, and we can’t turn the clock back.

Speaking to the intermingling of technology and society, Langdon Winner wrote in 1993 that “Substantial technical innovations involve a reweaving of the fabric of society — a reshaping of some of the roles, rules, relationships, and institutions that make up our ways of living together”[iv]. He elsewhere added, “It is no surprise to learn that technical systems of various kinds are deeply interwoven in the conditions of modern politics. The physical arrangements of industrial production, warfare, communications, and the like have fundamentally changed the exercise of power and the experience of citizenship”[v].

This relationship between technology and modern politics is an intuition that many in science and technology studies share today. Objects are not simply lifeless puppets that wait around to be picked up by their human masters; they are political actors that reflect and refract power relations. Indeed, for Bruno Latour, objects are “what explains the contrasted landscape we started with, the overarching powers of society, the huge asymmetries, the crushing exercise of power”[vi]. It is this object-oriented geopolitics that I am keen to explore in my own research.

We do not, therefore, have to wait until 2030 to discover what the impact of automated, remotely piloted warfare will be. It is here right now. Already, the drone has changed the way the U.S. wages war. The fact that the Predator was an “unmanned” aircraft was vital in circumventing the Congressional War Powers Resolution of 1973. As Peter Singer, author of Wired for War, wrote, “Choosing to make the operation unmanned proved critical to initiating it without Congressional authorization”, adding, “Like it or not, the new standard we’ve established … is that presidents need to seek approval only for operations that send people into harm’s way — not for those that involve waging war by other means”[vii].

Conclusion

We cannot unmake the drone any more than we can unmake the nuclear bomb. Both sit in the world as technological anchors, locking in a trajectory few of us can change. The Human Rights Watch report “Losing Humanity” is a welcome attempt to rein in the possible excesses of “killer robots”, and I for one appreciate the thoughtful recommendations it contains. But humans do not have much humanity left to lose: we were always “more-than-human” creatures living in a more-than-human world. We left the loop a long time ago.

 —

Ian Shaw is a research fellow in the School of Geographical and Earth Sciences at the University of Glasgow. He writes on U.S. drone warfare and runs an online blog “Understanding Empire”. His most recent academic paper, “Predator Empire: The Geopolitics of U.S. Drone Warfare” is due to be published in the journal Geopolitics.



[i] Gregory, D. (2011). The everywhere war. The Geographical Journal, 177(3): 238-250.

[ii] Turse, N. and Engelhardt, T. (2012). Terminator Planet: The First History of Drone Warfare, 2001-2050. Dispatch Books.

[iii] Shaw, I. G. R. and Akhter, M. (2012). The Unbearable Humanness of Drone Warfare in FATA, Pakistan. Antipode, 44(4): 1490-1509.

[iv] Winner, L. (1993). How technology reweaves the fabric of society. The Chronicle of Higher Education, 39(48): B1-B3.

[v] Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1): 121-136.

[vi] Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press, p.72.

[vii] Singer, P. W. (2012). Do Drones Undermine Democracy? The New York Times, 21 January.
