
Killer Robots: Not Just Science Fiction

Military weaponry is evolving, and we are getting closer to creating weapons systems able to make killing decisions without human intervention. These so-called “killer robots” are on their way, and halting their development before they reach the military scene could save lives. Although remote-controlled drones already have the capacity to kill from afar, fully autonomous weapons would go a step further, selecting and engaging targets without any human agency. These weapons do not yet exist, but the technological shift from human “in-the-loop” systems to “out-of-the-loop” systems is slowly but surely occurring, and attracting international concern in the process. Some have predicted that “killer robots” could arrive within 20 years.

In film, this threat is portrayed as malicious machines hell-bent on destroying humanity. Exaggerated as these portrayals are, they are not wholly inaccurate. Autonomous weapons systems (AWSs) are defined by the US Department of Defense as “weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator.” To many, this represents the end of human reasoning on the battlefield. And the cornerstone of human reasoning – especially in life-or-death situations – is the capacity for empathy. Critics of autonomous weapons argue that empathy simply cannot be programmed into a machine.

This removal of human checks on weapons systems has serious implications for international humanitarian law. Human Rights Watch’s evaluation of fully autonomous weapons concludes that, even with the compliance mechanisms proposed by some military strategists, these robots would be “incapable of abiding by the key principles of international humanitarian law” – specifically the rules of distinction and proportionality. Distinction is the rule that requires armies to distinguish between “combatants” and “noncombatants,” and it poses one of the greatest obstacles to autonomous weapons systems’ compliance with international law. Fully autonomous weapons would likely be unable to differentiate between soldiers and civilians. Particularly in current combat environments, in which civilian shields are used more and more frequently and enemy combatants have taken to hiding in populous areas, distinguishing between targets and the innocent people surrounding them is increasingly difficult.

The seemingly unbiased nature of robots could be seen as a positive. Yes-or-no questions like “Is this individual a terrorist?” should be easy for a machine to answer. In reality, however, these weapons would not be able to make that determination unless the individual were identifiable as a combatant through physical markings. It would also be easy for insurgents to fool robots by concealing their weapons.

It would also be impossible to program a machine with the human ability to assess an individual’s intentions, a judgment that is essential to determining targets. One of the most important ways to gauge intention is to read an individual’s emotional state, which a robot without emotions cannot do. A system devoid of emotion could not work effectively on a battlefield, and this lack of emotional reasoning also hinders compliance with international law. Proportionality, one of the most important and complex rules of international law, dictates that the civilian harm of an attack cannot outweigh its military advantage. Determining the proportionality of an attack is subjective and requires intimate knowledge of the context. Some computer scientists argue that autonomous weapons could not acquire this knowledge because of software limitations, while others say that the quantity of information would simply be too overwhelming for a single “mind.”

For example, if a target were hiding in a heavily populated city, the weapon would have to decide whether to fire. Cars would be driving back and forth, children would be running to school, and buses full of people could be speeding by. All of this information, as mentioned above, could be overwhelming for the weapon. It would have to register this constant influx of data while simultaneously deciding whether to fire on a civilian neighborhood. Such a complex decision could only be made with an extremely complex algorithm, and many military strategists argue that the technology is not ready.

Finally, there is the question of accountability. If autonomous weapons are integrated into military arsenals, it is inevitable that they will kill or injure civilians. When such casualties occur in conventional armed conflict, it is important that someone be held accountable. Accountability deters future attacks against civilians through punishment or shaming of the guilty party, and it provides a measure of retribution for those harmed by the attack. If an unlawful killing were carried out by an autonomous weapon, it would not be immediately clear who was responsible. Many people are involved in deploying and supporting the weapon, but the final decision would have been made by the weapon itself. Since there is no way to hold an inanimate object accountable for its decisions, giving such objects control over decisions to kill would eliminate an important tool for protecting civilians.

To meet the requirements of international humanitarian law, weapons need to operate with a human in the loop. Beyond these legal arguments, another important consideration is the more abstract question of the dignity of human life. Outlawing autonomous weapons would protect a fundamental moral principle: decisions to use force should be made with respect for the value of human life, and the power to make such a decision should rest with someone who has human experience. Weapons cannot apply past experience and moral principles to these situations, and they are therefore ill-equipped to make decisions in life-or-death situations. This judgment cannot be exercised by a weapon, no matter how much data it possesses.

Humans can feel empathy, which acts as a necessary check on killing. Psychologists and engineers alike recognize that weapons have no conception of their own mortality and therefore cannot feel the weight of their actions the way humans do. Although this moral argument is harder to quantify than the breaches of international humanitarian law described above, it is the most important one for governments to consider when making decisions about autonomous weapons. Inanimate machines cannot possibly understand the value of human life, and thus should not be responsible for choices about who lives and who dies.


About the Author

Isabella Creatura '18 is a staff writer for the Brown Political Review.
