Autonomous Weapons Systems and the Future of War

Military robots have been used in war in some primitive form since the beginning of the 20th century, but only recently have they moved from a marginal role to the center stage of contemporary military and intelligence operations. The most technologically advanced armed forces have invested heavily in the development of unmanned vehicles and robots of all kinds, ranging from the now very familiar Predator drones to IED-disposal robots, heavy unmanned ground vehicles, unmanned maritime vehicles, automated sentry guns, micro-robots, malicious software bots for sophisticated cyber attacks, and even nano-bots. The common denominator of all these new types of weapons is that they are becoming more and more automated and ultimately autonomous. Academics and peace activists in particular have recently voiced strong concerns over the prospect of ‘killer robots’ roaming the earth and indiscriminately going after human prey. Although some of these concerns are legitimate, they are strongly influenced by works of science fiction such as the Terminator or Matrix movies, which tend to reduce the complexities of these issues to the very simple formula that machines are evil and will ultimately bring about our doom. It is argued here that at the core of the potential dangers of military robotics lies the human factor rather than the intrinsic nature, limitations, or capabilities of machines. The main concern should not be making our military robots behave ethically, but ensuring that human military decision-makers have a strong sense of ethics and are constrained by an effective legal framework that prevents them from abusing the tremendous technological capabilities that will be at their fingertips within a decade or so.

Defensive vs. Offensive Autonomous Weapons

Highly automated defensive weapons have been around since the early 1980s; they include highly automated air defense and missile defense systems, smart mines, and other area defense weapons. Although they can be considered ‘autonomous’ in the sense that they can independently identify targets, trigger themselves, and sometimes independently pursue these targets, they cannot be used offensively because they lack important capabilities such as mobility, cognition beyond the ability to identify a narrow set of targets, and endurance. A human has to switch them on, refuel and reload them, and do the troubleshooting. Defensive systems also tend to operate in environments that are typically devoid of civilians, such as international borders, the high seas, the deep sea, and outer space. In short, few people have considered automated defensive systems to be more immoral than other weapons used for similar purposes. In fact, greater autonomy in the sense of a greater capability for discerning targets can result in much better humanitarian outcomes. Offensive autonomous systems are much more problematic, since they could operate with few limitations in geographic areas occupied by civilians and could accidentally commit war crimes by misidentifying targets. The problem is that offensive roles are very difficult for robots and remain, for now, beyond existing technological capabilities. In the long run, however, it is foreseeable that autonomous robots could slowly move up the ladder from purely defensive roles to more offensive ones, or from a rudimentary capability of responding to an intrusion or attack to actively seeking out targets in an extended geographic area and attacking them preemptively.

Thinking About Weapons System Autonomy

Autonomy in an engineering sense refers to a machine’s capability of carrying out its core mission with little or no human supervision. Autonomy is a spectrum and is not easily definable. Depending on the complexity of the function or mission to be carried out, the machine would need to be more or less capable of understanding key aspects of its environment and of making decisions. Some missions, such as area defense, are simple and require little intelligence beyond opening fire at a predefined set of targets. Other missions are more complex and would require much greater cognitive abilities if the robot is to operate successfully on its own.

  • Weapons Autonomy Based on Attacking Predetermined Unique Targets: the oldest autonomous weapon is of course the cruise missile, which can be programmed to independently attack a particular target following a pre-programmed course. This very limited type of autonomy has not raised major ethical concerns, since humans still select the target and since the actual autonomy is limited to finding and attacking that predetermined target.
  • Rule-Based Weapons Autonomy: the machine is given a set of parameters that govern its behavior and in particular its use of force. For example, the machine can be programmed to attack only targets that have very specific characteristics, in a particular geographic area, under very precise circumstances (a minimal sketch of such engagement rules follows this list). A rule-based autonomous machine can be made to conform to the laws of war if certain safeguards are built into it. The downside of this approach is that the machine lacks any flexibility to adapt to changing situations and can potentially be defeated easily once an opponent figures out its exact behavioral limitations. In other words, these would be dumb robots.
  • Self-Learning Machines or True Autonomy: there are already software algorithms that enable machines to optimize results by continuously learning from trial and error (the second sketch below illustrates the principle). Potentially, such self-optimization and techniques such as genetic algorithms could lead to ‘strong AI’: machines with cognitive abilities comparable, if not superior, to those of humans. It would be very hard to maintain effective control over self-learning robots, as they could develop in directions and behave in ways not anticipated by their human engineers. According to prominent technologists such as Google’s Eric Schmidt and the inventor Ray Kurzweil, human-like machine intelligence could arrive by 2029 or even within a decade. Even so, it is likely that machine intelligence will remain context-specific rather than universal, as is the case with human intelligence.
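
As a concrete illustration of the rule-based type of autonomy, the sketch below shows how pre-programmed engagement constraints might be expressed in software. It is a minimal, hypothetical example: the contact attributes, the coordinates of the engagement box, the confidence threshold, and the weapons_free flag are all invented for illustration and do not describe any real system.

```python
from dataclasses import dataclass

# Hypothetical rule-based engagement logic: the machine may only engage a
# target when every pre-programmed condition holds. All attributes,
# coordinates, and thresholds below are invented for illustration.

@dataclass
class Contact:
    lat: float
    lon: float
    emits_military_radar: bool        # sensor-derived attribute
    classification_confidence: float  # 0.0 .. 1.0

# A fixed geographic box and a confidence threshold stand in for
# "very specific characteristics in a particular geographic area".
ENGAGEMENT_BOX = (34.0, 35.0, 45.0, 46.0)  # lat_min, lat_max, lon_min, lon_max
MIN_CONFIDENCE = 0.95

def inside_box(c: Contact) -> bool:
    lat_min, lat_max, lon_min, lon_max = ENGAGEMENT_BOX
    return lat_min <= c.lat <= lat_max and lon_min <= c.lon <= lon_max

def may_engage(c: Contact, weapons_free: bool) -> bool:
    """Every rule must pass; any single failure means the machine holds fire."""
    return (
        weapons_free                      # a human has enabled the system
        and inside_box(c)                 # target is inside the authorized area
        and c.emits_military_radar        # target matches the narrow profile
        and c.classification_confidence >= MIN_CONFIDENCE
    )

if __name__ == "__main__":
    contact = Contact(lat=34.5, lon=45.5,
                      emits_military_radar=True,
                      classification_confidence=0.97)
    print(may_engage(contact, weapons_free=True))   # True: all rules satisfied
    print(may_engage(contact, weapons_free=False))  # False: human safeguard off
```

Note how the same rigidity that makes such a machine auditable also makes it exploitable: an adversary who learns the box coordinates or stops emitting radar simply falls outside the rules.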
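A second sketch illustrates the trial-and-error principle behind self-learning machines: a toy genetic algorithm that evolves bit-strings toward a target pattern. The fitness function, population size, and mutation rate are arbitrary illustrative choices; the point is only that the resulting behavior is discovered by the machine rather than specified rule by rule.

```python
import random

# Toy genetic algorithm: evolve bit-strings toward a target pattern by
# trial and error. It illustrates the generic self-optimization technique
# discussed above, not any real weapons software.

TARGET = [1] * 20        # the "ideal" behavior the algorithm gropes toward
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Score = how many bits match the target pattern.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]  # keep the fittest half
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness: {fitness(best)} / {len(TARGET)}")
```

Even in this toy setting no human writes the final genome; scaled up, that is precisely why self-learning systems can behave in ways their engineers did not anticipate.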

A Likely Scenario for the Use of Autonomous Weapons

Autonomous weapons governed by pre-programmed rules that cannot be changed by the machine itself are primarily useful for defensive missions such as area defense and would have little effectiveness in offensive roles, where enemies are more sophisticated and capable of exploiting the cognitive limitations of such robots. Once the enemy is not a group of ragtag guerrillas but a nation state capable of ‘anti-access/area denial’, the chances for an effective offensive military use of weapons with extremely limited cognitive and behavioral abilities are very slim. It is also unlikely that any military would want to develop and use autonomous weapons systems that are inherently unpredictable and that could be smart enough to question or reject orders from human commanders. For these reasons the most technologically advanced armed forces will, for offensive missions, tend to rely on autonomous weapons that attack pre-programmed unique targets. For example, an autonomous micro-drone could be sent to search for a particular individual using biometric identification methods (e.g. facial recognition or DNA analysis) and kill this individual with high precision and no collateral damage.

Alternatively, in high-tech, high-intensity warfare it makes most sense to combine human soldiers with robots that have limited autonomy. The human soldiers could direct the robots to carry out the most dangerous tasks, and the robots could vastly amplify the lethality and effectiveness of manned platforms. For example, the really revolutionary aspect of the F-22 fighter jet is not its stealth, speed, or agility, but its ability to control up to 40 drones that can fly ahead of the manned jet to do reconnaissance and clear threats in its path. To make the vision of human-robot teams a reality, human soldiers might need to be technologically ‘upgraded’ with performance-enhancing drugs, biochips/neurochips, nanotechnology, and so on, so that they can perform better alongside robots. This would obviously raise some very fundamental ethical questions in and of itself. For example, while neurochips could be used to enable soldiers to communicate better with robots, they could also be used for ‘roboticising’ soldiers – making them compliant with any order they receive from their commanders.

Conclusion

Although it cannot be ruled out, the idea that modern armed forces would be interested in building merciless and rather stupid ‘killer robots’ and unleashing them on the battlefield in a relentless pursuit of military victory at all costs seems very flawed. Western militaries are conscious of the laws of war, of the bad publicity of killing innocent civilians, and of the extent to which this would undermine the strategic objectives behind counterterrorism and counterinsurgency efforts. In the end, the contentious issue about autonomous weapons is not whether they can be made or used to comply with our standards of ethics in war. Of course they can. The real issue is how governments and their militaries will use autonomous weapons in pursuit of political and military objectives. Some governments might have good intentions and equip robots with ethical programming. But experience shows that intentions can change. Faced with overall defeat, such a government might remove the ethical safeguards in the hope of thereby averting a military disaster. The result could be a massacre. A criminally minded government might use autonomous weapons in a most horrific manner against its own population. Military robots would not hesitate to carry out ethnic cleansing and genocide, while human soldiers normally would. It is part of human nature to have an inhibition against killing, and it is typically only a very small number of psychopaths and sociopaths within a population who can be turned into highly effective killers. With robots there is no inherent inhibition to kill. As a result, future genocides could be conducted much faster and on a much more massive scale than ever before in history.

Political repression could likewise be enabled by the unchecked proliferation of autonomous systems to, and their use by, police forces. Police drones may continuously circle the sky, constantly spying on citizens, and they might be authorized to use Tasers and directed-energy weapons to incapacitate or kill anyone who is on a watch list or considered ‘dangerous’ for whatever reason. It would therefore be best to limit the uses and capabilities of autonomous weapons internationally, especially uses outside of war zones (e.g. domestic counterterrorism and police uses) and especially with respect to offensive roles. The scariest aspect of ‘killer robots’ is not that they may become self-aware and decide to wipe out humanity – it is what some criminal governments could do with autonomous weapons to their own civilian populations.

Armin Krishnan is an Assistant Professor for Security Studies at East Carolina University. He is the author of Killer Robots: Legality and Ethicality of Autonomous Weapons (Ashgate 2009) and a German language book on targeted killing (Gezielte Tötung: Die Zukunft des Krieges, Matthes & Seitz Berlin Verlag 2012).           
