Commenting on the operation of unmanned aerial vehicles (UAVs), more commonly known as military drones[1], an analyst is cited as saying: “It’s like a video game. It can get a little bloodthirsty. But it’s fucking cool” (Sparrow, 2009, p. 184). This quote illustrates the perils embodied by the new and evolving technology of military drones and the ethical ramifications inherent in its use. Throughout history, humanity has had to grapple with the dangers brought about by technological breakthroughs. Yet some twenty-first-century technologies, including robots (military or otherwise), pose a qualitatively different threat from technologies that have come before (Joy, 2006; Singer, 2009a). Joy (2006) points out that every design and use of a new technology brings with it unintended consequences that may prove disastrous. In the field of military robotics especially, with its inherently lethal potential, there are therefore serious implications for law, ethics and morality. More often than not, morality and ethics struggle to keep up with new and emerging technologies, and this can generate ethical lacunae and uncertainty (Singer, 2010). The present paper aims to answer the question of how the challenges of ‘moralizing’ military drones can be addressed.
This paper is divided into two main parts. The first part introduces and elucidates the relevant theoretical concepts; the second applies these concepts to the case of military drones. Firstly, the nexus between morality and the human and technological domains will be presented. Secondly, the concept of technological mediation will be introduced and its meaning for moral decision-making elaborated. Thereafter, the resulting ramifications for the design of technology will be highlighted and the notion of ‘moralizing’ technology explicated. Subsequently, these theoretical concepts will be utilized to analyze the various ethical challenges posed by military drones and to investigate how these challenges can be addressed by ‘moralizing’ drone technology.
Ethics, Morality and Technology
To begin with, the commonly held notion that morality belongs solely to the human domain needs to be discarded. Morality is also an affair of things, and morality and ethics therefore have a material dimension (Verbeek, 2011). This might seem an outlandish claim, so some deeper investigation is called for. The most obvious place to start analyzing the above premise would seem to be the field of engineering ethics, since the design process is commonly regarded as the place where individuals (engineers) take responsibility for the moral aspects of the technologies they create. However, engineering ethics currently takes a mainly externalist approach to technology. That is, it assumes a radical separation between the technological and societal spheres, and ethical reflection is limited to the engineer’s individual responsibility to perform conscientious risk assessment and to ‘blow the whistle’ in the design stage if he or she sees immoral design practices or evidently immoral consequences of certain innovations (Swierstra & Jelsma, 2005). Technologies are viewed here in a purely instrumental manner: they are intended to perform a function, and if they do so in an unethical way, the engineer has the responsibility to ensure that innovations in the technological sphere do not have (ethically) adverse effects in the human realm (Verbeek, 2009). In this instrumentalist view, technology is a neutral instrument with no moral significance in and of itself; it is how humans use the technological artifact that shapes its ethical aspects. The fundamental limitation of this approach is that it disregards the inextricable intertwinement between the technological and societal spheres that lies at the heart of Science, Technology and Society Studies (STS).
In order to fully understand this intertwinement and the moral dimension of technological artifacts, it is useful to employ the concept of technological mediation. Technologies are more than neutral ‘intermediaries’: they actively co-shape both how people perceive reality and how people act (Ihde, 1990; Latour, 1992). The term ‘co-shape’ is crucial here to avoid reverting to classical technological determinism, in which technologies determine human behavior and little room is left for human agency (and, by extension, human responsibility) (MacKenzie & Wajcman, 1999). Instead, when technologies are used, they help constitute people’s perception of reality and inform the human decisions taken within that constituted reality. For instance, technologies like obstetric ultrasound play a crucial role in defining human illness and health: ultrasound enables the detection of serious congenital defects, thereby ‘translating’ the fetus into a possible patient with preventable suffering and informing a new set of ethical choices for the parents (Verbeek, 2011). As a result of the mediating role of ultrasound technology, parents are faced with the responsibility of making difficult moral decisions about the lives of unborn children. At the same time, however, technologies do not determine the decisions humans make; they merely inform them by constituting humans’ perceived reality and actions. Hence, “moral decision-making is a joint effort of human beings and technological artifacts” (Verbeek, 2011, p. 206).
So far, it has been established that technology mediates how humans perceive reality and that it also mediates human actions, discouraging certain actions and inviting others. Given that ethics is about the question of how to act, it follows that technological mediation reveals an inherently moral dimension in technology (Verbeek, 2009). In fact, technological artifacts are bearers of morality, since they consistently take various moral decisions for humans. For example, a speed bump invites people to choose between slowing down and damaging their car’s shock absorbers (Latour, 1992). Similarly, alcohol locks for cars make a moral decision for people by preventing them from driving while drunk. While technologies cannot be moral agents in themselves (since that requires intentionality and a certain autonomy), it is evident that technologies embody some kind of morality in the context of human-technology relations (Verbeek, 2009). Moreover, since human autonomy of action is mediated by technology, human intentionality is co-shaped by technology. As a result of this active co-shaping, it can be concluded that moral responsibility needs to be borne neither by humans alone nor by technologies alone, but by the complex blend of humans and technological artifacts (Verbeek, 2009).
This conclusion, that joint moral responsibility results from the pervasiveness and inevitability of technological mediation in our technological culture, has major ramifications for engineering ethics and the design of technology. As the notion of technological mediation illustrates, the design of technologies needs to comprise more than technical risk calculations. When technologies are used in societies, they mediate our human experiences and co-shape our moral decision-making. Consequently, the design of technological artifacts is inherently a moral activity, and a holistic approach to considerations of ethics and responsibility is necessary. Even without explicit ethical reflection, engineers designing artifacts are “doing ethics by other means” (Verbeek, 2011, p. 207), and it is therefore desirable to aim for an explicit ‘moralization of technology’ (Achterhuis, 1995). This means proactively embedding desirable moral values in technologies so as to shape our actions for the better. Accordingly, engineers should focus not only on the obvious functions of the technological artifacts they create but also on their mediating roles and the ethical implications that result (Verbeek, 2011).
The ‘moralization of technology’ is a complex and difficult task that requires the anticipation of mediations. Beyond the fact that the future social impacts of technologies are notoriously difficult to predict, designers and engineers are not the only ones contributing to the materialization of mediations. The future mediating roles of technologies are also shaped by the interpretations of users and by emergent characteristics of the technologies themselves (Verbeek, 2009). This means that designers cannot simply ‘inscribe’ a certain morality into technologies; rather, the capacity for ‘moralizing’ a specific technology depends on the dynamics of the interplay between engineers, users and the technologies themselves. While predicting a technology’s future mediating role is undoubtedly a very complex task, it can be assisted by a so-called ‘mediation analysis’, like the one proposed by Swierstra and Waelbers (2010). These scholars designed a matrix for the technological mediation of morality which provides engineers with a framework for reflecting more deeply on the prospective social role of the artifacts they design.
While such a ‘mediation analysis’ may aid engineers in their ‘moralization of technology’ to a certain extent, it does not adequately address the criticism that the technology design process is undemocratic. An alternative approach, ‘Constructive Technology Assessment’ (CTA; cf. Rip, Misa & Schot, 1995), tries to democratize the design process by engaging all relevant stakeholders during development. The premise underlying CTA is that technologies are not ‘fixed’ and that their development is neither linear nor spontaneous but rather the result of complex interactions among the different actors involved. As such, all relevant actors have a stake in the moral operation of the socio-technical ensemble, and the democratization of the technology design process therefore contributes to the ‘moralization of technologies’ in a broader sense (Verbeek, 2009). This is precisely what STS scholars intend to achieve by opening the black box of technology and analyzing the complex dynamics of its design. In the following, some important moral challenges posed by military drones will be analyzed using the theoretical concepts presented thus far, and possible ways to address these challenges will be discussed.
The Case of Military Drones
The US military alone already employs several thousand unmanned aerial vehicles all over the world, and the number of lethal UAVs has seen a spectacular 1,200% increase over the past six years (“Flight of the drones,” 2011). With over forty countries now developing unmanned aerial systems, it is indispensable to seriously discuss the moral and ethical issues linked to this technology (Singer, 2009a). Using drones may give rise to a number of unanticipated problems and ethical challenges.[2] As established above, technologies mediate how humans perceive reality and co-shape people’s decision-making. In most cases, for example when looking through a pair of binoculars, we do not think twice about trusting the mediating effect of technology, because we trust the physical laws of optics and there are usually no resulting ethical dilemmas (Sullins, 2009). By contrast, in the case of military drones, the military pilot controlling the drone, usually from thousands of miles away, perceives reality in, say, Pakistan’s tribal areas through “machine vision” (Salvini, 2007, p. 156). The infrared camera and other high-tech video sensors on the robotic drone filter and purge (visual) information, thereby creating a mediated image which is transferred to the remote pilot’s screen. As a result of this mediation of reality, the tele-operators of drones see the world differently than they would without this “telepistemological distancing” (Sullins, 2009, p. 268). The term refers to the effect that battlefield information, mediated through the technologies on remotely operated drones, has on the tele-operator’s epistemological perception of the remote reality, and it raises the question of how this distancing affects the pilot’s ability to make ethical decisions.
As the theoretical discussion above has shown, the very decision to fire a lethal weapon is co-shaped by technological artifacts (in this case the drone’s sensors and the broader technical system of communication between the pilot and the UAV). It is therefore insufficient to assume that, because a moral agent is controlling the drone, the actions of the socio-technical ensemble must be moral. The reason is that the inherent design of military drones limits the ability of humans to be in full, intelligent control of them (Sullins, 2009). On the one hand, the telepistemological and, by extension, physical distancing in the form of a real-time image on a remote screen serves as a protective shield which minimizes (indeed, effectively eliminates) the threat to the human soldier, and this aspect is arguably morally good.[3] On the other hand, this same distancing effect can lead to the dangers of abstraction and moral disengagement (Salvini, 2007), meaning that this type of warfare could contribute to dehumanization and to a disregard, in drone pilots’ perceptions, for the moral agency of enemy combatants (Sullins, 2009). Such ‘moral buffers’ may have an adverse effect on people’s adherence to the rules of engagement and the laws of armed conflict, and could co-shape people’s actions in such a way as to make war crimes more likely (Sparrow, 2009).
Recalling the notion that moral responsibility must be borne jointly by humans and technological artifacts, several points are in order concerning the design of UAVs. First, since the user interface is the primary way in which operators perceive the remote reality of the battlefield, it can be expected that the design of this interface co-shapes the decisions of life and death that remote pilots make. It can therefore be argued that engineers have a responsibility to try as best they can to “build ethics into the user interface” (Sparrow, 2009, p. 181). Obviously, this is a tremendously difficult task, and the details of how exactly to do it are beyond the scope of this paper, but in essence designers should work towards an interface that facilitates killing where it is justified and discourages it where it is not (Cummings, 2006); in other words, they should try to “desig[n] out war crimes” (Sparrow, 2009, p. 179). Engineers could do this by striving to build ‘good’ sensors and interfaces that present the operator with the moral information relevant to the combat situation. Furthermore, drone technology can be ‘moralized’ by counteracting the creation of ‘moral buffers’ between the pilot and his or her actions. This entails ensuring that operators’ reliance on the increasing amount of automation in military drones does not pass the threshold at which the human feels less responsible for an action because it was fully or partly the machine that made the decision (Sparrow, 2009). Here, the co-shaping role of technology is clearly evident, and the moral challenges inherent in the design of military drones should not be underestimated. While by no means disregarding the moral agency of drone operators, the concept of technological mediation highlights the ethical importance of the ‘moralization’ of drone technology by engineers at the design stage. To this end, engineers might do well to perform a ‘mediation analysis’ in order to better anticipate the mediating effects of their designs on human perception and action.
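To make the idea of ‘building ethics into the user interface’ more concrete, the following minimal sketch illustrates one way such interface logic might be structured. It is purely illustrative: the field names, thresholds and confirmation flow are assumptions invented for this example, not features of any real drone control system or of Sparrow’s proposal.

# Illustrative sketch only: a hypothetical weapon-release flow that surfaces
# morally relevant information and keeps the decision (and the responsibility)
# with the human operator. All names and thresholds are invented assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class EngagementContext:
    target_id_confidence: float     # confidence in positive identification (0-1)
    est_collateral_risk: float      # estimated risk to non-combatants (0-1)
    automation_recommendation: str  # what the automated system suggests


def request_weapon_release(ctx: EngagementContext,
                           operator_confirms: Callable[[str], bool]) -> bool:
    """Refuse 'one-click' engagement: present the morally relevant facts and
    require an explicit, informed human decision before any weapon release."""
    briefing = (
        f"Positive ID confidence: {ctx.target_id_confidence:.0%}\n"
        f"Estimated collateral risk: {ctx.est_collateral_risk:.0%}\n"
        f"Automation suggests: {ctx.automation_recommendation} "
        "(advisory only; the decision and responsibility remain with you)"
    )
    # Counteract the 'moral buffer': marginal cases demand extra deliberation
    # from the operator rather than quiet deference to the automation.
    if ctx.target_id_confidence < 0.95 or ctx.est_collateral_risk > 0.10:
        return (operator_confirms(briefing + "\nSecond confirmation required.")
                and operator_confirms("Confirm again: is this engagement justified?"))
    return operator_confirms(briefing)

The design choice worth noting is that the automation’s output is explicitly framed as advisory, so that reliance on it cannot quietly displace the operator’s sense of responsibility; this is precisely the threshold Sparrow warns about.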
Cognizant of the poor ethical record of human soldiers with the technology currently available, some scholars have begun discussing the implications of taking the design of drones one step further, towards semi- or fully autonomous uninhabited aerial vehicles in which humans remain ‘in the loop’ but in a monitoring rather than a controlling role (Arkin, 2009; Sullins, 2010). The roboticist and roboethicist Ronald Arkin (2009) has developed a prototype of a so-called ‘ethical governor’ intended to enable robots to do the right thing. His basic premise is that, in time, it will be feasible to program military drones so that they behave more ethically on the battlefield than humans do (Arkin, 2009). While Arkin admits that the algorithms for his ethical governor are not yet ready for use in current combat operations, he is convinced that there is no “fundamental scientific limitation to achieving the goal” (Orca, 2009). In his own way, then, Arkin strives to ‘moralize’ drone technology by embedding an ethical algorithm in it, thereby turning autonomous robotic systems into artificial moral agents in their own right. This is a manifest illustration of how the designers of a technology are “doing ethics by other means” (Verbeek, 2011, p. 207). Moreover, some scholars in the field view the development and deployment of fully autonomous military drones as the inevitable outcome of the vast research effort and political will directed towards this technology (Sullins, 2010; Wallach & Allen, 2009). This is highly questionable when viewed through an STS framework, which maintains that technological development does not follow some predetermined inner logic but is rather the outcome of complex negotiations between relevant actors. By the same token, a senior US military official has commented that the limiting factor in the development of drone technology will more likely be policy than the technology itself (“Flight of the drones,” 2011).
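The logic of an ‘ethical governor’ can be gestured at with a toy sketch: a software layer that evaluates a proposed lethal action against encoded constraints drawn from the laws of armed conflict and the rules of engagement, and vetoes the action unless every constraint is satisfied. The constraints and action attributes below are invented for illustration and are vastly simpler than anything in Arkin’s actual architecture.

# Toy illustration of the 'ethical governor' concept: permit a proposed
# lethal action only if no encoded constraint forbids it. The rules below
# are invented placeholders, not Arkin's (2009) implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProposedAction:
    target_is_combatant: bool  # positive identification (discrimination)
    inside_kill_zone: bool     # within the authorized engagement area
    proportional: bool         # expected harm proportional to military gain


# Each constraint returns True when the action is permissible under that rule.
Constraint = Callable[[ProposedAction], bool]

CONSTRAINTS: List[Constraint] = [
    lambda a: a.target_is_combatant,  # laws of armed conflict: discrimination
    lambda a: a.inside_kill_zone,     # rules of engagement: geographic limits
    lambda a: a.proportional,         # proportionality requirement
]


def ethical_governor(action: ProposedAction) -> bool:
    """Veto the proposed action unless every constraint is satisfied."""
    return all(constraint(action) for constraint in CONSTRAINTS)


# Example: the governor suppresses an engagement that fails proportionality.
strike = ProposedAction(target_is_combatant=True, inside_kill_zone=True,
                        proportional=False)
assert ethical_governor(strike) is False

Even this toy version makes visible the central difficulty Arkin acknowledges: everything depends on whether notions such as ‘combatant’ and ‘proportional’ can be operationalized reliably enough to be encoded at all.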
When it comes to policy, a further ethical challenge of drone technology is that it may make warfare more likely. While the above-mentioned distancing effect has the morally good result of removing soldiers from harm’s way, the tele-operation of military drones also carries the unfortunate consequence of potentially increasing the likelihood of war. The rationale is that it is politically far more palatable to deploy robotic drones than human soldiers, and that using drones appears to be an attractive way to fight ‘clean’ and surgical wars. In other words, as the political and human costs of fighting wars diminish, decision-makers’ threshold for starting an armed conflict is considerably lowered (Singer, 2009a). An increased likelihood of war resulting from drone technology is obviously not an ethical outcome, and more scrutiny of military drone activities is therefore needed (Sullins, 2009). Moreover, keeping in mind that technological artifacts are not merely neutral means but co-shapers of our actions, it becomes evident that drones help constitute the conditions of possibility that re-shape our practices and influence our ends (Coeckelbergh, 2011). What is more, interpretative flexibility still exists with regard to the doctrines the American military will adopt when organizing its drones, and a serious public discussion of how this technology might influence military ends is therefore called for (Geraci, 2011; Singer, 2009b).
Finally, a public discussion about drone technology should not limit itself to the influence on military doctrine but should also address the challenge of democratizing the development of military robotics, including lethal drones, with the aim of contributing to the ‘moralization’ of the technology. Several initiatives have already been taken in this direction. For instance, Moshkina and Arkin (2007) conducted a survey aimed at better understanding the expectations that various demographic groups (the public, researchers, policy-makers and military personnel) hold regarding battlefield robotics. If the results of this study (and similar ones) are seriously taken into account by the engineers and other groups directly involved in the design process of these technologies, this may bring us a step closer to ‘moralizing’ military drone technology by democratizing the design process of military robotics. It must be acknowledged that, given the sensitive nature of this field, it is questionable how much democratic influence is desirable in the design of military technology. Nevertheless, there have been other serious efforts to convene experts from various relevant fields (inter alia engineers, international lawyers and military ethicists) to collaborate more effectively in addressing the ethical and legal implications of new military robotics (Lucas, 2010). One of the most prominent forms of democratic deliberation on this subject is the ‘Consortium on Emerging Technologies, Military Operations, and National Security’ (CETMONS), initiated in 2008 (Lucas, 2010, p. 291). By engaging relevant stakeholders in the development process of drone technology, this consortium appears to be in line with the above-mentioned ‘Constructive Technology Assessment’ approach developed by Rip, Misa and Schot (1995). In this way, the STS goal of opening the black box of technological development may help society better address the challenges surrounding the ‘moralization’ of military drones.
Conclusion
This paper analyzed how the challenges of ‘moralizing’ military drones can be addressed. Firstly, it was demonstrated that ethics and morality have both a human and a technological dimension and that the two spheres are in fact closely intertwined. Secondly, the concept of technological mediation was introduced and it was examined how technologies co-shape both our perceptions of reality and our moral agency. Thereafter, it was argued that the inevitability of technological mediation in our technological culture makes technology design an inherently moral process, and that engineers should aim to ‘moralize’ technologies so as to shape our actions for the better. Subsequently, these theoretical concepts were applied to the case of military drones. It was shown how understanding the mediating effects of this technology, and ensuring wider participation in its development process, can help us ‘moralize’ its design. By focusing on the ethical importance of a good user interface and utilizing public deliberative fora, we may be able to address some of the moral challenges that drones entail.
Since technological artifacts stabilize human relationships by co-shaping our perceptions and actions in our constructed environment, it is of the utmost importance that we design military robots with human priorities foremost in mind (Geraci, 2011; Latour, 2005). Our technologies offer both peril and promise, enabling as well as constraining our moral actions, and we must therefore heed the consequences that technological mediation has for our ability to act ethically. Reflecting the fundamental importance of a better understanding of how to address the ethical challenges of military drones, there has been an “ethics surge” (Lucas, 2010, p. 292) in the field. However, recalling the quote at the beginning of this paper and presuming that we do not want drone pilots making life and death decisions with the feeling that they are merely playing a video game, it appears that much work remains to be done in ‘moralizing’ drone technology design in order to promote more ethical behavior on the remote battlefield.
Reference List
Achterhuis, H. (1995). De moralisering van de apparaten. Socialisme en democratie, 52(1), 3-12.
Arkin, R. C. (2009). Governing lethal behavior in autonomous robots. Boca Raton, FL: Chapman & Hall/CRC.
Coeckelbergh, M. (2011). From killer machines to doctrines and swarms, or why ethics of military robotics is not (necessarily) about robots. Philosophy & Technology, 24(3), 269-278.
Cummings, M. L. (2006). Automation and accountability in decision support system interface design. Journal of Technology Studies, 32(1), 23-31.
Flight of the drones. (2011, Oct. 8). The Economist. Retrieved Oct. 21, 2011, from http://www.economist.com/node/21531433
Geraci, R. M. (2011). Martial bliss: War and peace in popular science robotics. Philosophy & Technology, 24(3), 1-16.
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indianapolis, IN: Indiana University Press.
Joy, B. (2006). Why the future does not need us. In A. Teich (Ed.), Technology and the future (pp. 115-136). New York: St. Martin’s Press.
Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping technology (pp. 225-258). Cambridge, MA: MIT Press.
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.
Lucas, G. R. (2010). Postmodern war. Journal of Military Ethics, 9(4), 289-298.
MacKenzie, D., & Wajcman, J. (Eds.). (1999). The social shaping of technology. Buckingham, UK: Open University Press.
Moshkina, L., & Arkin, R. C. (2007). Lethality and autonomous systems: Survey design and results. Georgia Institute of Technology, Atlanta, GA.
Orca, S. (2009, Jul. 24). Teaching robots the rules of war. h+ Magazine. Retrieved Oct. 20, 2011, from http://hplusmagazine.com/2009/07/24/teaching-robots-rules-war
Rip, A., Misa, T. J., & Schot, J. (Eds.). (1995). Managing technology in society – the approach of Constructive Technology Assessment. London: Pinter.
Royakkers, L., & Van Est, R. (2010). The cubicle warrior: The marionette of digitalized warfare. Ethics and Information Technology, 12(3), 289-296.
Salvini, P. (2007). The ethical and societal implications of presence from a distance. In L. Moreno (Ed.), PRESENCE 2007: The 10th Annual International Workshop on Presence (pp. 155-158). Barcelona: Starlab Barcelona S.L.
Singer, P. W. (2009a). Military robots and the future of war. [Video file]. TED Conferences. Long Beach, CA. Retrieved Oct. 20, 2011, from
Singer, P. W. (2009b). Wired for war?: Robots and military doctrine. Joint Force Quarterly, 52(1), 104-110.
Singer, P. W. (2010). The ethics of killer applications: Why is it so hard to talk about morality when it comes to new military technology? Journal of Military Ethics, 9(4), 299-312.
Sparrow, R. (2009). Building a better warbot: Ethical issues in the design of unmanned systems for military applications. Science and Engineering Ethics, 15(2), 169-187.
Strawser, B. J. (2010). Moral predators: The duty to employ uninhabited aerial vehicles. Journal of Military Ethics, 9(4), 342-368.
Sullins, J. P. (2009). Roboethics and telerobotic weapons systems. Paper presented at the IEEE International Conference on Robotics and Automation, Kobe, Japan.
Sullins, J. P. (2010). RoboWarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263-275.
Swierstra, T., & Jelsma, J. (2005). Trapped in the duality of structure: An STS approach to engineering ethics. In H. Harbers (Ed.), Inside the politics of technology: Agency and normativity in the co-production of technology and society (pp. 199-227). Amsterdam: Amsterdam University Press.
Swierstra, T., & Waelbers, K. (2010). Designing a good life: A matrix for the technological mediation of morality. Science and Engineering Ethics, 18(1), 157-172.
Verbeek, P.-P. (2009). The moral relevance of technological artifacts. In P. Sollie (Ed.), Evaluating new technologies (pp. 63-77). Dordrecht, Netherlands: Springer.
Verbeek, P.-P. (2011). Designing morality. In I. van de Poel & L. Royakkers (Eds.), Ethics, technology, and engineering: An introduction (pp. 198-216). Malden, MA: Wiley-Blackwell.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.
[1] This paper will deal primarily with tele-operated, lethal unmanned aerial vehicles (UAVs) like the Predator or Reaper drones employed by the US military today. These are flying military robots that are generally operated by military pilots from thousands of miles away and are equipped with various sensors and guided weapons. They have the capability to identify, track and follow targets and to launch missiles or drop bombs.
[2] Due to space limitations, this paper must limit its scope to the ethical challenges of drones most relevant for the analysis of technological mediation and technological ‘moralization’. One challenge that has been left out is how to deal with the psychological stress of remote operators, also known as ‘cubicle warriors’. For a detailed discussion on this subject see Royakkers and Van Est, 2010.
[3] Strawser (2010) even goes so far as to argue for an ethical duty to employ UAVs on the grounds that there is an obligation to prevent unnecessary potential lethal risk to human life (the military pilot) where this is possible.
—
Written by: Christopher Newman
Written at: University College Maastricht
Written for: Dr. Jessica Mesman
Date written: 10/2011