Maybe her creators should have given her another name. The research branch of the U.S. Defense Department, DARPA, is putting on a big competition in Florida this Friday and Saturday for the world's most advanced robots, and one of the stars of the show is a humanoid thing that geeks at NASA's Johnson Space Center, working in a place they call "The Bunker," decided to christen Valkyrie.
You may remember that in Norse mythology and the Nazis' Wagnerian propaganda the Valkyries were the maidens who decided which heroes would be slain in battle. So the name is appropriate if this machine is the progenitor of a robot race that will one day go to war. But nobody connected with the DARPA (Defense Advanced Research Projects Agency) Robotics Challenge wants to admit that. Their stated aim is to save lives and explore Mars.
In fact, Valkyrie and her fellow competitors (Chimp, RoboSimian, Hubo, Schaft and Thor) are at the center of a debate beset by distortions and spin on every side. Their developers want to portray them as benign; their detractors want to ban "killer robots." But what's certainly true is that we're at "the beginning of a historic transformation in robotics," as DARPA puts it. And the inescapable reality is that some machines will save lives and some will take lives, and they'll be programmed to make, on their own, the relatively simple but critical decisions that determine who survives and who dies.
Many of these "robots" will take the form of airborne drones, big and small; some will be weapons systems on ships; and some, like Valkyrie and the other competitors scrambling over the obstacle course at Homestead-Miami Speedway, will be moving more or less like animals and humans.
The implications are enormous as all this comes amid widespread and growing excitement about robotics in daily life. Jeff Bezos just floated the imaginative notion that Amazon.com will be using drones to deliver packages in the not-too-distant future. The Google empire, always the spotter and setter of trends, is meanwhile busy buying up some of the best robotics labs in the business. But there's no doubt the sinister Schwarzeneggerian shadow of The Terminator haunts much of the discussion of military automatons.
A year ago, Human Rights Watch and the International Human Rights Clinic at Harvard published a report with the arresting title "Losing Humanity: The Case against Killer Robots," which I found perfectly convincing when I first read it. "Fully autonomous weapons," it concluded, would be unable to meet international legal standards under the Geneva Conventions, and, in action, they would have no compassion to temper their lethal judgments.
Because death would be dealt by a machine, questions of which human beings actually bore responsibility would get even murkier than they usually are in war. If something went wrong, would the programmer be to blame? The manufacturer? And because the great powers of robot warfare would lose relatively fewer human soldiers in combat, they might be tempted to launch invasions and escalate confrontations more casually than if they had to answer to the parents, spouses and children of the soldiers they sent into harm's way.
For all these reasons, "Losing Humanity" argued that "fully autonomous weapons should be banned and that governments should urgently pursue that end." In the months since then, a well-organized "Campaign to Stop Killer Robots" has gained momentum. In April, a report from the United Nations' special rapporteur on extrajudicial, summary or arbitrary executions called for a pause on the development of "lethal autonomous robots" so that governments can study the implications. U.N. Secretary General Ban Ki-moon recently endorsed those findings. And in November the Convention on Conventional Weapons put the issue on its agenda.
"A year ago, no countries were talking about this topic," says Mary Wareham at Human Rights Watch. Now, more than 40 countries have spoken out on it, most of them supporting some sort of international agreement governing the development of lethal robots.
But however much the world community may talk, there's really no question of real-world Terminators being terminated.
When I spoke to Christof Heyns, the U.N. rapporteur who called on governments to pause and reflect on the future of these killer machines, his view was considerably more nuanced than the absolutism of the ban-the-bot crowd.
"The march of technology goes on," says Heyns, and weaponization inevitably intrudes. The first airplanes in combat were meant to be used only for observation, but they soon acquired guns and bombs. The first sophisticated drones sent aloft by the United States were surveillance aircraft, until they got Hellfire missiles mounted on them.
Today, the Americans and the British are conducting advanced tests on "unmanned combat air vehicles," the Northrop Grumman X-47B and the BAE Systems Taranis (named after the Celtic god of thunder). Human operators are supposed to be "in the loop" controlling them from the ground, but the planes' onboard computers operate with reaction times far beyond those of a living, breathing man or woman. Algorithms will make the critical split-second decisions in air-to-air combat, and the enemy's flesh-and-blood pilots, if they are fool enough to go up against these UCAVs, will die.
"The path we are on is automation," says Kenneth Anderson of American University, who used to work with Human Rights Watch on the campaign against land mines in the 1990s, but is a critic of its "Losing Humanity" analysis. "For certain purposes the human will not be fast enough to remain within the weapons loop." At sea, for instance, the Aegis Combat System on U.S. warships is meant to blow multiple missiles out of the air as they try to attack. No human would have the reflexes to do that. Once the action starts, Aegis thinks for itself.
"There's a concern," says Matthew Waxman of Columbia Law School, who works closely with Anderson, "that a highly automated system where a human is kept in the loop inadvertently becomes an autonomous system." But in practical terms that is a very hard line to draw.
On land, on the chaotic battlefields of what seem to be the countless, endless "little wars" of the 21st century, it's likely that the first really sophisticated robots to see action will be used to rescue soldiers and to operate in areas where radiation, chemical or biological contamination would make it very hard for human troops to survive. Already, fully controlled robots are used for tasks like bomb disposal.
For the next generation of robots, military missions could easily be variations on the tasks dreamed up for a competitor in the DARPA Challenge: It must "maneuver effectively in environments it has not previously encountered, use whatever human tools are on hand without the need for extensive reprogramming, and continue to operate even when degraded communications render motion-level control by a human [like a joystick] not feasible," according to DARPA. To get to that stage requires what's called "task-level autonomy," meaning the robot has to be able to carry out some actions on its own.
The most advanced of these machines are "like a one-year-old child beginning to walk and interact with the world, there will be stumbles and falls," says DARPA. But by the time the finals for the challenge roll around in 2014, the contenders that pass the trials this weekend should "demonstrate roughly the competence of a two-year-old child, giving them the ability to autonomously carry out simple commands such as 'clear the debris in front of you' or 'close the valve.'"
Or, one might say, find the target, aim the gun and pull the trigger.
Let's not be horrified by that prospect. Although many people are loath to admit it, apart from weapons of mass destruction the impact of technology on warfare has been to reduce the number of civilian and, in many cases, military casualties. The precision bombing of Serbia in 1999 or Iraq in 2003 had nothing to do with the carpet bombing of North Vietnam or the incendiary holocausts unleashed by Allied bombers over Dresden and Tokyo in World War II.
American Special Operations Forces will tell you that if your aim is to take out individuals deep in hostile territory, today's drones may be the most precise method ever devised: they can wait for hours or days before firing a shot, and the full chain of command can weigh in on whether to do it or not. A commando team, on the other hand, will always have to operate quickly to get in and, it hopes, to get out.
Heyns' report and the Human Rights Watch paper argue that robots will be incapable of showing compassion, and that's absolutely correct. But they will also be immune to other, more negative and deadly emotions.
"It is important to keep in mind," says Waxman, "that humans fail in important ways all the time on the battlefield as a result of other human limitations: panic, fear, vengeance. One of the things that is striking to me about this [robotics] debate is that many of the very groups who are promoting an absolute ban spend much of their time documenting the failings of humans when it comes to things like targeting on the battlefield."
So, yes, let's ban war if we can. And let's do think long and hard about the implications of lethal automatons. But let's not kid ourselves. If we are going to go into battle, I want my side to have the most effective and humane weapons available, in that order. And if robots can fill that bill some day, so be it.