New Technologies, New Wars
Today, an ever-increasing number of countries use unmanned aerial vehicles (UAVs), often called drones, not only for civil applications but also for military purposes such as reconnaissance and killing people. As a result, a new form of war has emerged – if it can still be called “war” at all – in which specific individuals are targeted and killed from a distance. This evolving practice raises a number of ethical issues, which are now receiving much attention from academics (see, for example, Sullins 2010; Coeckelbergh 2011, 2013; Sharkey 2012), politicians (see the UN report by Heyns), military organizations, and the public.
This article briefly examines the ethics of these “killer robots,” paying particular attention to the issues this new technology brings to the fore. When, if ever, is it justified to use these robotic technologies for killing people?
Justifying War
The first problem, when looking at the ethics of war, concerns the justification of war itself. While most people may think that using violence for individual self-defense when attacked can be ethically acceptable under certain circumstances, it is an entirely different matter to say that the use of violence by military forces operating at the nation-state level is, by definition, justified. Moreover, even if you have no ethical problem with war as such, it is usually not obvious that war is a justifiable act in a specific situation. There is a long tradition in political philosophy that attempts to determine what, exactly, constitutes a “just war” – that is, the conditions under which it is legitimate and right to start a war (ius ad bellum) and the ethical principles that should be followed in the course of war activities (ius in bello).
But imagine, for the sake of argument, that all these principles and conditions are satisfied: suppose that military violence is not wrong in principle, that the specific war in question is justified, and that military actions in this war follow the principles of just war. Is the use of drones for killing people then justified? I argue that it remains highly ethically problematic, for the following three reasons.
Targeted Killing and “Easy” War
First, the use of drones for killing specific individuals is a form of killing which, by itself, differs considerably from how killing was undertaken in the major wars of the 20th century. Targeted killing is different from “anonymous” forms of killing en masse, in which the enemy is not perceived as comprising individuals. Targeted killing more closely resembles assassination, a planned lethal attack on an individual for political or, indeed, military purposes, and it is unclear whether any clear line separates the two. In other words, it is questionable whether this kind of killing can still be described using the term “war” unless war is re-defined, and it is unclear whether the principles of just war can (or should) be applied to this form of killing. The principles of ius ad bellum, for instance, tell us when it is legitimate to start a war. But where killer robots are concerned, it is uncertain whether using this technology even counts as starting a war.
In addition, even if one were to accept that such actions count as “war,” it seems remarkably easy to start a war of this sort. This is a very different scenario from the historical one, in which starting a war literally involved mobilizing an entire nation. In particular, because drones are relatively cheap (in terms of both financial and human cost) and easy to deploy, there is a danger that the decision to use them is taken too lightly and that the principles of just war are not followed, either because the action is not defined as war or because the war has already started before ethical and legal principles come into play.
Killing from a Distance and New Forms of Intimacy
Second, the kind of “killer robots” under consideration here – unmanned aerial vehicles – are operated from a distance. This implies that the killing is also done from a distance, and this by itself raises serious ethical issues. It is well known in military psychology (see Grossman 1996) that killing from a distance is generally easier than killing at close range. Here the problem is similar to that of other distance technologies, such as manned airplanes that drop bombs from high altitude (consider aerial bombing during World War II, including the dropping of the atomic bombs). If the person who pushes the button does not see what they are doing to those on the ground, then it is questionable whether the killing is justified. Knowing what you are doing is an essential condition for responsible action, and if distance makes killing all too easy, then the natural moral-psychological barrier to killing is removed. There is no place for empathy, no knowledge of the suffering one causes, and the distance between killer and target seems unbridgeable. Killing by drone therefore seems almost like a computer game. The operators go to their compounds during the day, as many of us go to work, and in the evening they can return home to their families. There is no dirty killing, or so it seems. The blood and the suffering, if visible at all, are not directly experienced.
As I have argued recently (Coeckelbergh 2013), however, contemporary sensor technologies have advanced so much that it is increasingly possible for operators to see exactly what they are doing to people on the ground. As operators track their targets and follow their daily lives, they are given the opportunity to develop empathy, which does provide a barrier to killing, even if the killing is remote. To some extent, at least, the camera technology bridges the distance, distinguishing the drone operator from the World War II bomber. As with many other contemporary electronic information and communication technologies, a new kind of “intimacy” is created, albeit intimacy at a distance: drone operators become rather familiar with their targets, who may no longer appear merely as names on a list, but as fathers, husbands, people making their way home, and so on. Operators come to know the lives of their targets as they observe them over a long period of time, plausibly softening the distancing effects of the drone technology. An indication that this is indeed happening lies in the reported psychological problems faced by drone operators (see this article in the New York Times).
That being said, from an ethical perspective the distance still counts. The situation of the drone operator is still different from that of soldiers (and civilians) on the ground, who are more directly involved in the military and social situation, and who know that the person they are killing (or not killing) is a human being with a face, a name, and perhaps a family.
“Machine Killing” and Automated Killing
Third, “machine killing” is also problematic if, and to the extent that, it is automated. New types of drones are being developed that do not just fly automatically (which is not really new, since autopilots fly passenger airplanes and this is widely perceived as unproblematic), but also kill automatically. This raises the important ethical question of whether it can ever be justifiable for machines to make such life-and-death decisions autonomously and to kill “by themselves,” automatically, without (direct) human intervention.
In a recent conference paper, I distinguished what I think are two good arguments against automated machine killing. The first is rooted in the assumption that, in order to make a decision about life and death – if such a decision is morally permissible at all – human capacities of moral reasoning and feeling are needed. Since killer robots lack these capacities, in particular the emotional ones (they may have some capacity to “reason” by means of algorithms), decisions about killing should not be delegated to them. The second argument emphasizes a different asymmetry: one of vulnerability and of the experience of vulnerability. Machines cannot be “hurt” and they cannot be “killed.” They are vulnerable too, but in a very different way, and, importantly, they do not experience their vulnerability. This implies that killer robots cannot know what they are doing to humans; they cannot even understand the threat they pose to these humans. I conclude from these arguments that killing should not be automated in any way – whether by killer robots or otherwise – and that if one engages in killing at all, and if the killing and the war are justified at all, humans should always remain crucially and substantially involved in “killer” actions.
Should We Stop Killer Robots?
On the one hand, killer robots pose questions that are not new, such as problems related to the justification of war and, partly, the problem of distance. On the other hand, the technology also raises new problems: it may make it easier to start a war; it is not clear how this type of targeted killing differs from assassination; drones create a moral distance between killer and target, even if, at the same time, sensor technologies bridge the distance and render the killing psychologically more difficult; and automated killing seems an especially problematic possibility, given the asymmetries between humans and robots when it comes to their capacities to make moral decisions and the nature of their vulnerabilities.
I conclude that there are serious ethical problems with using this technology in warfare, problems which may justify prohibiting certain types of killer robots, such as fully automatic lethal machines and drones that lack sufficiently sophisticated cameras and other on-board equipment to help bridge the distance. Finally, the practice of using these robots for targeted killing raises the question of whether this practice still counts as “war” at all and, if it does, how these technologies re-define what we mean by the term to begin with.
Deciding about killer robots means deciding about the future of war and killing. Responsible decisions about these issues should, therefore, only be taken by those who are fully informed about not only what these new technologies do, but also what they might signify for the future.