Nick Bostrom—father of modern debates about existential risk from artificial intelligence—is worried he may have yelled “Terminator!” in a crowded theater, creating a neo-Luddite stampede that could “ruin the future” by plunging the world into a dark age of technological stagnation.
Bostrom is right to worry. The risk of technological stagnation—despite what shows like Black Mirror imply—presents a real dystopian danger. Overstated risks of biotechnology exacerbated malnutrition in developing nations, killing millions, and more recently fed unease about COVID vaccines. A half-century of opposition to nuclear power has catastrophically reduced the world’s carbon-free energy generation capacity, and nuclear plants continue to be closed and replaced with fossil-fuel generation.
Twenty years ago, Bostrom established Oxford’s Future of Humanity Institute to explore existential risks to civilization—including AI. Since then he has become the leading public intellectual on AI risk, penning a book about the dangers of the emerging technology followed by a TED Talk. Now, his once esoteric concerns about AI’s existential risks have attracted global mainstream media attention as the technology has accelerated, influencing President Joe Biden’s executive order, U.K. Prime Minister Rishi Sunak’s AI safety summit, and the rhetoric around the impending EU AI Act.
Consequently, Bostrom—like Oppenheimer—has started to worry he might have opened Pandora’s box. In a recent podcast interview, he said that while he still considered concern about AI too low (though close to optimal levels), he had another worry: the panic that has emerged from the alarmist rhetoric he helped fuel may be “like a big wrecking ball” that could lead to a “social stampede to say negative things about AI.”
“It would be tragic if we never developed advanced artificial intelligence,” he added.
He said that outcome, while still unlikely, was more likely than ever, and that if concerns keep growing at their current pace, they could lead to outright AI prohibition and the world saying, “Well, let’s wait for a thousand years before we do that.”
To Bostrom this risk is less likely than that of malevolent AI turning against humanity—a determination that seems ignorant of history. The risks of super-intelligent AI are purely theoretical and fictional; they are nothing more than a popular sci-fi trope.
Meanwhile, we’ve seen what doomsaying about scientific innovation can cause: the stunting of nuclear energy and genetic engineering. Those fears have created a more dangerous and dystopian world.
Nevertheless, it was refreshing to hear Bostrom offer a counter-narrative to alarmism amid a chorus of doomsaying from contemporaries such as Eliezer Yudkowsky, who suggested airstrikes on “rogue datacenters” in Time magazine as a theoretical option for enforcing an international treaty to slow AI development. Or Tristan Harris, the co-founder and executive director of the Center for Humane Technology, who insisted on Glenn Beck’s podcast that while strong AI would cure cancer, it’d wipe out humanity as an encore.
Unlike Bostrom, Harris not only failed to consider the risk of depriving the world of a cancer cure through over-precaution, but also floated a morbid hypothetical: if a powerful AI had offered his late mother a life-saving treatment but was also malevolent and destructive, he would rather there was no treatment at all.
The problem with extinction narratives is that almost everything is worth sacrificing to avoid it—even your own mother.
Or, in Bostrom’s case, our freedom. In 2019 he floated the prospect of mass surveillance to prevent dangerous AI development: a wearable necklace covered in cameras that, he joked, could be branded the “freedom tag” to reduce public opposition. Ironically, these proposals to save us from dystopia would make a great plot for an episode of Black Mirror.
Let’s not take sci-fi tales of dystopia so seriously we panic and make them a reality.