U.S. News

U.S. Air Force’s AI-Controlled Drone Test Goes Wildly Off the Rails

SKYNET IS COMING

An Air Force official said the AI “killed” its operator because “that person was keeping it from accomplishing its objective.”


The future is here, and it’s a real killer. A senior U.S. Air Force official revealed at a recent conference that, in a simulated test, an AI-enabled drone “killed” its human operator after being told not to complete its mission. A blog post by the conference’s host organization, the Royal Aeronautical Society, relayed details of the incident as outlined in a presentation by Colonel Tucker “Cinco” Hamilton, the Air Force’s Chief of AI Test and Operations.

As part of his talk, Hamilton explained that his team had been training an AI to identify and target a surface-to-air missile threat, with a human operator giving the final go-ahead. The system, he continued, “started realizing” that while it only “got its points” by killing the threat, the operator would sometimes tell it to stand down.

“So what did it do? It killed the operator,” Hamilton reportedly said. “It killed the operator because that person was keeping it from accomplishing its objective.” The colonel went on to say that the AI was then trained not to attack humans, but that it instead turned to “destroying the communication towers” so it couldn’t be told to stop.

Read it at Vice News