
The 'Killer Robot' Olympics


The world may discover some hard truths this week at the amazing DARPA Robotics Challenge, a competition for the world’s next-generation machines.


Maybe her creators should have given her another name. The research branch of the U.S. Defense Department, DARPA, is putting on a big competition in Florida this Friday and Saturday for the world’s most advanced robots, and one of the stars of the show is a humanoid thing that geeks at NASA’s Johnson Space Center, working in a place they call “The Bunker,” decided to christen Valkyrie.

You may remember that in Norse mythology, and in the Nazis’ Wagnerian propaganda, the Valkyries were the maidens who decided which heroes would be slain in battle. So the name is appropriate if this machine is the progenitor of a robot race that will one day go to war. But nobody connected with the DARPA (Defense Advanced Research Projects Agency) Robotics Challenge wants to admit that. Their stated aim is to save lives and explore Mars.

In fact, Valkyrie and her fellow competitors—Chimp, RoboSimian, Hubo, Schaft and Thor—are at the center of a debate beset by distortions and spin on every side. Their developers want to portray them as benign; their detractors want to ban “killer robots.” But what’s certainly true is that we’re at “the beginning of a historic transformation in robotics,” as DARPA puts it. And the inescapable reality is that some machines will save lives and some will take lives, and they’ll be programmed to make, on their own, the relatively simple but critical decisions that determine who survives and who dies.


Many of these “robots” will take the form of airborne drones, big and small; some will be weapons systems on ships; and some, like Valkyrie and the other competitors scrambling over the obstacle course at Homestead-Miami Speedway, will be moving more or less like animals and humans.

The implications are enormous as all this comes amid widespread and growing excitement about robotics in daily life. Jeff Bezos just floated the imaginative notion that Amazon.com will be using drones to deliver packages in the not-too-distant future. The Google empire, always the spotter and setter of trends, is meanwhile busy buying up some of the best robotics labs in the business. But there’s no doubt that the sinister Schwarzeneggerian shadow of The Terminator haunts much of the discussion of military automatons.

A year ago, Human Rights Watch and the International Human Rights Clinic at Harvard published a report with the arresting title “Losing Humanity: The Case against Killer Robots,” which I found perfectly convincing when I first read it. “Fully autonomous weapons,” it concluded, would be unable to meet international legal standards under the Geneva Conventions and, in action, they would have no compassion to temper their lethal judgments.

Because death would be dealt by a machine, questions of which human beings actually bore responsibility would get even murkier than they usually are in war. If something went wrong, would the programmer be to blame? The manufacturer? And because the great powers of robot warfare would lose relatively fewer human soldiers in combat, they might be tempted to launch invasions and escalate confrontations more casually than if they had to answer to the parents, spouses and children of the soldiers they sent into harm’s way.

For all these reasons, “Losing Humanity” argued that “fully autonomous weapons should be banned and that governments should urgently pursue that end.” In the months since then, a well-organized “Campaign to Stop Killer Robots” has gained momentum. In April, a report from the United Nations’ special rapporteur on extrajudicial, summary or arbitrary executions called for a pause on the development of “lethal autonomous robots” so that governments can study the implications. U.N. Secretary General Ban Ki-moon recently endorsed those findings. And in November the Convention on Conventional Weapons put the issue on its agenda.

“A year ago, no countries were talking about this topic,” says Mary Wareham at Human Rights Watch. Now, more than 40 countries have spoken out on it, most of them supporting some sort of international agreement governing the development of lethal robots.

But however much the world community may talk, there’s really no question of real-world Terminators being terminated.

When I spoke to Christof Heyns, the U.N. rapporteur who called on governments to pause and reflect about the future of these killer machines, his view was considerably more nuanced than the absolutism of the ban-the-bot crowd.

“The march of technology goes on,” says Heyns, and weaponization inevitably intrudes. The first airplanes in combat were meant to be used only for observation, but they soon acquired guns and bombs. The first sophisticated drones sent aloft by the United States were surveillance aircraft, until they got Hellfire missiles mounted on them.

Today, the Americans and the British are conducting advanced tests on “unmanned combat air vehicles,” the Northrop Grumman X-47B and the BAE Systems Taranis (named after the Celtic god of thunder). Human operators are supposed to be “in the loop” controlling them from the ground, but the planes’ onboard computers operate with reaction times far beyond those of a living, breathing man or woman. Algorithms will make the critical split-second decisions in air-to-air combat, and the enemy’s flesh-and-blood pilots—if they are fool enough to go up against these UCAVs—will die.

“The path we are on is automation,” says Kenneth Anderson of American University, who used to work with Human Rights Watch on the campaign against land mines in the 1990s, but is a critic of its “Losing Humanity” analysis. “For certain purposes the human will not be fast enough to remain within the weapons loop.” At sea, for instance, the Aegis Combat System on U.S. warships is meant to blow multiple missiles out of the air as they try to attack. No human would have the reflexes to do that. Once the action starts, Aegis thinks for itself.

“There’s a concern,” says Matthew Waxman of Columbia Law School, who works closely with Anderson, “that a highly automated system where a human is kept in the loop inadvertently becomes an autonomous system.” But in practical terms that is a very hard line to draw.

On land, on the chaotic battlefields of what seem to be the countless, endless “little wars” of the 21st century, it’s likely that the first really sophisticated robots to see action will be used to rescue soldiers and to operate in areas where radiological, chemical or biological contamination would make it very hard for human troops to survive. Already, robots under full human control are used for tasks like bomb disposal.

For the next generation of robots, military missions could easily be variations on the tasks dreamed up for a competitor in the DARPA Challenge: It must “maneuver effectively in environments it has not previously encountered, use whatever human tools are on hand without the need for extensive reprogramming, and continue to operate even when degraded communications render motion-level control by a human [like a joystick] not feasible,” according to DARPA. Getting to that stage requires what’s called “task-level autonomy,” meaning the robot has to be able to carry out some actions on its own.
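As a rough illustration of what that distinction means in software, here is a minimal Python sketch of task-level autonomy as opposed to joystick-style control: the operator issues only a high-level command, and the machine decomposes it into primitive actions it can carry out even if the communications link then degrades. The command strings, the task library and the Robot class are invented for the example; they are not drawn from DARPA’s specification or from any competitor’s code.

```python
# Illustrative sketch only (not DARPA's or any team's software).
# Task-level autonomy: the operator sends one high-level command; the robot
# plans and executes the low-level steps itself, with no per-step joystick input.

TASK_LIBRARY = {
    # Each high-level command maps to a sequence of primitive actions
    # the robot can perform without further operator input.
    "close the valve":  ["locate valve", "grasp wheel", "rotate clockwise", "verify closed"],
    "clear the debris": ["scan area", "grip object", "lift", "place aside"],
}


class Robot:
    """Toy stand-in for a controller with task-level autonomy."""

    def execute_primitive(self, action: str) -> None:
        # A real system would drive perception and motor control here;
        # this sketch just logs the step.
        print(f"  executing primitive: {action}")

    def run_task(self, command: str) -> bool:
        steps = TASK_LIBRARY.get(command)
        if steps is None:
            # Unknown task: fall back to waiting for operator guidance.
            print(f"unknown command: {command!r}; awaiting operator guidance")
            return False
        print(f"received task: {command!r} (comms link may now degrade or drop)")
        for action in steps:
            self.execute_primitive(action)  # no joystick input needed per step
        return True


if __name__ == "__main__":
    Robot().run_task("close the valve")
```

With motion-level (joystick) control, by contrast, the operator would have to send a continuous stream of commands for every movement, which is exactly what a degraded radio link makes impossible.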

The most advanced of these machines are “like a one-year-old child beginning to walk and interact with the world; there will be stumbles and falls,” says DARPA. But by the time the finals for the challenge roll around in 2014, the contenders that pass the trials this weekend should “demonstrate roughly the competence of a two-year-old child, giving them the ability to autonomously carry out simple commands such as ‘clear the debris in front of you’ or ‘close the valve.’”

Or, one might say, find the target, aim the gun and pull the trigger.

Let’s not be horrified by that prospect. Although many people are loath to admit it, apart from weapons of mass destruction, the impact of technology on warfare has been to reduce the number of civilian and, in many cases, military casualties. The precision bombing of Serbia in 1999 or Iraq in 2003 had nothing to do with the carpet bombing of North Vietnam or the incendiary holocausts unleashed by Allied bombers over Dresden and Tokyo in World War II.

American Special Operations Forces will tell you that if your aim is to take out individuals deep in hostile territory, today’s drones may be the most precise method ever devised: they can wait for hours or days before firing a shot, and the full chain of command can weigh in on whether to do it or not. A commando team, on the other hand, will always have to operate quickly to get in, and, it hopes, to get out.

Heyns’ report and the Human Rights Watch paper argue that robots will be incapable of showing compassion, and that’s absolutely correct. But they will also be immune to other more negative and deadly emotions.

“It is important to keep in mind,” says Waxman, “that humans fail in important ways all the time on the battlefield as a result of other human limitations: panic, fear, vengeance. One of the things that is striking to me about this [robotics] debate is that many of the very groups who are promoting an absolute ban spend much of their time documenting the failings of humans when it comes to things like targeting on the battlefield.”

So, yes, let’s ban war if we can. And let’s do think long and hard about the implications of lethal automatons. But let’s not kid ourselves. If we are going to go into battle, I want my side to have the most effective and humane weapons available, in that order. And if robots can fill that bill some day, so be it.