A number of years ago I attended a Christian conference where one of the keynote speakers was a leading researcher in the field of robotics. One of his talks began with a comedic video clip showing a parody of an insurance commercial offering “robot insurance,” a product designed to insure against robot attacks. The commercial ends with the sobering line: “…for when the metal ones decide to come for you.”
Although the notion of a robot attack seems humorous, there is in fact active research in the area of lethal autonomous robots.1 While drones and robots are currently used in the military, a “human in the loop” makes the final decision in an attack. In contrast, lethal autonomous robots would be able to operate and kill without human involvement.
The motivation for research into autonomous military robots is to provide “force multiplication,” to “extend reach,” and to reduce “friendly casualties.” In the U.S. military, there is an ongoing push for more unmanned systems and vehicles. In certain places, such as along demilitarized zones, automatic defense systems are already being explored. Clearly, this research is fraught with ethical issues. Several organizations, including Human Rights Watch and the UN Human Rights Council, are calling for a ban or a moratorium on this research. Other organizations established to wrestle with these new questions include the IEEE-RAS Technical Committee on Roboethics, the International Committee for Robot Arms Control, and a new IEEE standards association called the Global Initiative for Ethical Considerations in the Design of Autonomous Systems.
One motivation cited for pursuing lethal autonomous robots is the atrocities that occur in wartime. Some have argued that autonomous robots could reduce civilian casualties.2 Soldiers experience hunger, fear, fatigue, and emotions such as the desire for revenge that can impact the decisions they make on the battlefield. Robots, it is argued, will be more “humane” since they are not affected by feelings. Furthermore, robots could be equipped with better sensors than a human has to assess the battle situation and detect non-combatants. Robots could be programmed to act conservatively and strictly follow the rules of engagement. Just-war principles could be hard-coded into robots to minimize suffering, avoid injury to non-combatants, and respond only proportionately. Robots could be equipped with an “ethical governor” that ensures the robot acts responsibly, an idea explored in a book entitled Governing Lethal Behavior in Autonomous Robots. Technology, it is argued, might provide a solution to man’s inhumanity to man.
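To make the idea concrete, here is a minimal sketch, in Python, of what a veto-style “ethical governor” might look like. Everything in it, the class, the rules, and the thresholds, is a hypothetical illustration rather than the architecture described in the book: the point is only that the governor sits between the targeting system and the weapon and suppresses any action that violates a hard-coded constraint.

```python
# A hypothetical sketch of an "ethical governor" as a veto layer.
# The class, fields, rules, and thresholds are invented for
# illustration; they are not drawn from any real system.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    target_is_combatant: bool         # classifier's judgment about the target
    classification_confidence: float  # 0.0 to 1.0
    expected_collateral: int          # estimated harm to non-combatants
    force_level: int                  # 1 (minimal) to 5 (maximal)
    threat_level: int                 # 1 to 5, assessed threat posed by target


def governor_permits(action: ProposedAction) -> bool:
    """Return True only if every hard-coded constraint is satisfied;
    any doubt results in a veto (the conservative default)."""
    if not action.target_is_combatant:
        return False  # discrimination: non-combatants are off-limits
    if action.classification_confidence < 0.95:
        return False  # act conservatively under uncertainty
    if action.expected_collateral > 0:
        return False  # strict limit on collateral harm
    if action.force_level > action.threat_level:
        return False  # proportionality: no more force than the threat warrants
    return True


# Example: a confident identification, but disproportionate force, is vetoed.
print(governor_permits(ProposedAction(True, 0.99, 0, force_level=5, threat_level=2)))  # False
```

Even this toy version hints at the difficulty raised below: each threshold (why 95 percent confidence?) is itself an ethical judgment, made in advance by a programmer far from the battlefield.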
Those who argue against lethal autonomous robots suggest that they will lower the threshold of entry into war, since robots provide a “risk-free” form of warfare. Others worry about robots running amok, a scenario that forms the premise of many dystopian sci-fi movies (such as Terminator or Westworld). Could lethal robots be hacked or infected with a computer virus? Still others suggest that the technical challenges of building such robots are far too complex.
But there is an even more fundamental question: should we hand over ethical decisions to a robot? If humans can act irresponsibly, does it follow that responsibility ought to be handed over to machines? Just because something is technically possible does not imply that we ought to do it. One of the principles of a just war is that someone should be held justly responsible for the results. Who would be responsible for a casualty caused by a robot soldier? The robot? The programmer? The general who sent out the robot? Can a robot commit a war crime? Responsibility becomes unclear in the case of robot soldiers where there is no direct human control.
Furthermore, is it even possible to program ethical principles into a robot? Can ethics be reduced to a step-by-step computer program? Isaac Asimov explored these ideas in his novels with the notion of the “three laws of robotics.” Although these laws were carefully crafted to prevent harm, the novels unpack the problems that arise for machines when the laws come into conflict. Exploring thoughtful ways for humans and machines to work together could ensure that someone remains justly responsible for any actions that take place.
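As a toy illustration of why such conflicts are hard for a program, here is a sketch that encodes Asimov-style laws as a strict priority ordering. The scenario and the scoring are invented for illustration; they show only that a fixed ordering of rules still leaves the machine without an answer when a single situation forces a trade-off within the same top-priority law.

```python
# A toy encoding of Asimov-style laws as a strict priority ordering.
# The scenario and scoring below are invented; they only illustrate
# that a fixed ordering can fail to decide when a top law conflicts
# with itself.

LAWS = {
    1: "Do not injure a human or, through inaction, allow a human to come to harm",
    2: "Obey human orders, unless that conflicts with Law 1",
    3: "Protect your own existence, unless that conflicts with Laws 1 or 2",
}


def choose(options):
    """Prefer the option whose most serious violation is of the least
    important law (highest law number). If two options tie, the
    ordering alone cannot decide."""
    least_bad = max(opt["worst_violated_law"] for opt in options)
    winners = [opt for opt in options if opt["worst_violated_law"] == least_bad]
    return winners[0]["name"] if len(winners) == 1 else None


# Two people are in danger and the robot can save only one: either
# choice lets a human come to harm, so both options violate Law 1.
options = [
    {"name": "save person A", "worst_violated_law": 1},
    {"name": "save person B", "worst_violated_law": 1},
]

print(choose(options))  # None: the rules give no answer; a human must decide
```

The ordering handles the easy cases, but the moment Law 1 conflicts with itself the program returns no decision, which is precisely the tension Asimov’s stories dramatize.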
As technology continues to advance, the possibility of lethal autonomous robots is a real one. The efforts to make robots more ethical are commendable, but this research comes with many thorny questions. Ethics cannot be reduced to rules and flowcharts; it involves understanding context, exercising virtues, and discerning norms, sometimes in complex situations. Rather than taking out robot insurance, Christians need to engage in the debate and challenge the notion that we can hand over ethical responsibilities to our machines.
This article is based in part on an article by the same author that originally appeared in Christian Courier.