An unmanned air surveillance vehicle is set to be launched high above the China Lake Naval Air Weapons Station in an exercise to test military technology. (Photo: AP/Peter Mitchell)

US Predicts Killer Robots 40 Years Away, Raises Ethics Debate

August 04, 2009 06:00 PM
by Haley A. Lovett
A U.S. Air Force report claims that by 2047 unmanned aircraft will be able to determine whether or not to strike a target—how will these killer robots change the face of war?

Report Calls for Debate About Rules of Unmanned Combat

The way we fight wars has changed dramatically over the last 50 years, and in about the same amount of time, predicts the U.S. Air Force in its “Unmanned Aircraft Systems Flight Plan 2009-2047,” we’ll be fighting with far less human involvement.

Unmanned aerial vehicles (UAVs) are already commonplace in the military; the devices, much like expensive remote-control planes, are flown by pilots thousands of miles away. Professor Noel Sharkey told the BBC that these UAVs were involved in around 60 attacks in Pakistan over the last three years, attacks that resulted in nearly 10 times that number of civilian deaths. Sharkey also pointed out that for the remote operators, the effects of war are very different from what they would be for someone sitting in the pilot’s seat.

According to the report, by 2047 artificial intelligence aboard unmanned aircraft will be able to evaluate combat situations and decide whether or not to strike, while abiding by the rules of combat. The Air Force also foresees using unmanned drones to provide backup for manned missions and to help protect human pilots in combat, according to Michael Cooney of Computerworld.

There are many obstacles to surmount before unmanned warfare becomes a reality. The first is the technology itself: Darren Murph of Engadget points out that nowhere does the report explain how exactly the Air Force will create these artificially intelligent machines. Security is another, Cooney notes, since an unmanned craft is vulnerable to radio interference or even hostile takeover. Perhaps most important, as several sources point out, is the ethical dilemma of letting machines decide whom to kill, and when.

Killer Robots and the Three Laws of Robotics

Science fiction legend Isaac Asimov predicted a world in which many robot-like machines would be an integral part of human life. Asimov also foresaw the creation of human-like robots capable of thought, such as the artificially intelligent machines militaries are trying to develop today. In his short story collection “I, Robot,” Asimov introduces the Three Laws of Robotics:

“1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

“2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

“3. A robot must protect its own existence, except where such protection would conflict with the First or Second Law.”

Creating fully autonomous killer robots would directly conflict with Asimov’s first law. However, as science fiction author Robert J. Sawyer points out, Asimov’s “laws” have never been programmed into any computers or robots, and are unlikely to be in the future.
