Opinion

The Social Media Panicmongers Have Pivoted to AI

APOCALYPSE, AGAIN

The science populists who stoked a Reefer Madness-like hysteria about social media are onto their next target—and predictably predicting the imminent end of the world.

Photo Illustration by Luis G. Rendon/The Daily Beast/Getty

In 2023, artificial intelligence (AI) quickly eclipsed social media and smartphones as the technology du jour for secular doomsday preachers.

Concerns about content-ranking algorithms and “dark patterns” suddenly felt quaint compared to sentient AI exterminating (or displacing) every human on the planet. Mark Zuckerberg, once an unyielding digital titan, now feels like MySpace Tom reigning over an uncool, increasingly irrelevant virtual realm.

This sudden narrative shift posed a dilemma for a cottage industry of self-styled “tech-ethicists” who once effortlessly garnered book deals, headlines, and interviews on the topic of social media-induced societal collapse. Now, they find themselves outcompeted in the attention economy by AI safety researchers like Eliezer Yudkowsky, who called for theoretical nuclear strikes on server farms in Time magazine, and Connor Leahy, who went on CNN to warn Christiane Amanpour about the extinction of the human race. So, they hastily adjusted their messaging to compete.


Roger McNamee, author of Zucked: Waking Up to the Facebook Catastrophe, dismissed the positive potential of AI on CNN only days after a New York Times report on AI helping a paralyzed man walk again. Facebook whistleblower Frances Haugen predicted 10 million deaths from social media while promoting her new book, The Power of One: How I Found the Strength to Tell the Truth and Why I Blew the Whistle on Facebook. The influential public intellectual Yuval Noah Harari, author of Sapiens: A Brief History of Humankind, went from calling free information “dangerous” to suggesting tech executives should face jail for allowing AI-generated profiles.

All three signed their names to a March 2023 open letter—released by the Elon Musk-backed Future of Life Institute—which demanded a six-month hiatus on AI development.

Also among the letter’s signatories was Tristan Harris, the photogenic poster boy of tech-ethicism who famously eschewed his six-figure Silicon Valley salary—after pushing for design ethics at Google—to start the Center for Humane Technology, where he led a crusade against smartphones and social media. Harris’ influence and profile have now risen to the point of his being invited to a recent Senate meeting on AI, along with Bill Gates and Elon Musk.

A month after the letter’s publication, Harris and his team delivered a chilling presentation titled “The AI Dilemma,” a reference to the social media-panic documentary The Social Dilemma, in which they were also heavily involved. In the film, Harris claimed, “no one ever said this about the bicycle”—regarding social media’s impact on society—an ahistorical statement that was quickly (and ironically) fact-checked on social media.

Over a hundred influential leaders in media, government, philanthropy, and business gathered in New York City to hear Harris and his Center for Humane Technology co-founder, Aza Raskin, make a presentation that opened with a comparison of AI to nuclear weapons. The analogy was designed to evoke fear, leaving no room for nuanced discussion of risk and benefit (nuclear technology is also a critical tool for carbon-free energy, for example). The notion was seemingly borrowed from Yuval Noah Harari, whose quote, “What nukes are to the physical world… AI is to everything else,” was also featured later in the presentation.

It was the kind of rhetoric worthy of people the neuroscientist Darshana Narayanan has labeled “science populists”—a group she defined as “gifted storytellers who weave sensationalist yarns around scientific ‘facts’ in simple, emotionally persuasive language.” Narayanan’s piece focused on Harari, but could have just as easily been about Harris.

Harris and Raskin’s presentation fit this description throughout and, thankfully, at least two veteran journalists in the audience saw through it. Steven Levy, editor at large of Wired magazine, wrote a scathing article in March 2023 titled “How to Start an AI Panic,” laying into their populist, sensationalist rhetoric—and comparing it to the Reefer Madness tone of The Social Dilemma.

In a post-presentation interview with Harris, veteran tech reporter Kara Swisher asked about the claim that “50 percent of AI researchers predict a 10 percent chance of extinction,” noting that the statistics were drawn from “a non-peer reviewed survey, a single question one, with around 150 responses.”

Harris retorted: “Don’t trust one survey.”

That’s good advice, considering only 3 percent of 4,271 AI experts contacted for the survey (and 20 percent of the respondents) answered the question at all.

And yet, weeks later Harris co-authored a New York Times op-ed that revolved around that aforementioned single survey, turning it into a thought experiment: “Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it.” The piece wove a sensationalist yarn around supposed facts in simple, emotionally persuasive language. If that sounds familiar it might be because one of its co-authors was Yuval Noah Harari.

The piece was called “bananas” by veteran tech writer Charlie Warzel of The Atlantic, and “ridiculous” by a former editor of The Verge. Noah Giansiracusa, a math professor and influential voice on AI risk, said the analogy was “wildly misleading.” Even author, tech researcher, and self-identified neo-Luddite Jathan Sadowski described it as “three idiots telling scary sci-fi stories about AI with flashlights under their chins.” Nevertheless, it was shared widely and uncritically by some media heavyweights, like The Atlantic’s Anne Applebaum and MSNBC’s Mehdi Hasan.

The unbelievable prospect that crusaders against populist misinformation could themselves be guilty of perpetuating populist misinformation has seemingly insulated Harris, his organization, and his contemporaries from meaningful critique in the mainstream media. But once you notice it, it can’t be ignored and, in retrospect, seems obvious.

Harris’ anti-tech campaigning began before he quit Google, with an internal presentation on design ethics in which he cited the condition “email apnea” as if it were medical consensus. He also alleged, inexplicably, that reading a single email causes “our liver to dump glucose and cholesterol into our blood.”

After he ended his Google career, Harris brought this type of rhetoric to mainstream outlets like Fox News, CNN, and MSNBC, as well as the massively popular podcast The Joe Rogan Experience, where he talked of the Chinese Communist Party (CCP) as a benevolent content curator for China’s youth, showing them “educational and patriotism videos” that made them “want to become astronauts instead of influencers.” He called this CCP-curated content “Spinach TikTok”—spinach being good for you—as opposed to the “Opium TikTok” that kids in Western democracies are being fed, in Harris’ view. (He made the same claim on 60 Minutes, too.)

Harris even boasted to Rogan that it was “quite literally as if Xi Jinping saw The Social Dilemma,” seemingly oblivious to the fact that China’s notorious one-child policy was influenced by another Western public intellectual’s panicmongering—that of Paul Ehrlich, author of The Population Bomb, who predicted mass starvation in the 1970s. (Underpopulation is the far greater concern in both China and the West, these days.)

In May 2023, Harris appeared on right-wing commentator Glenn Beck’s podcast. Beck opened the show by noting that Geoffrey Hinton—the “godfather of AI” who dramatically quit Google recently over AI concerns—had, unlike Harris, refused to come on his show. In the interview, Harris likened consumer-facing AIs (such as ChatGPT) to “gain-of-function” research and “the one ring” from The Lord of the Rings, to which Beck suggested: “Throw it in the volcano.” (Harris heartily agreed, repeating the suggestion.)

This Faustian bargain would bring “unbelievable benefits” on the road “to our annihilation,” Harris told Beck. One example he offered was a cure for cancer, something he noted could have saved his late mother’s life. But had an AI-delivered cure also released “some demon into the world,” Harris declared, he wouldn’t have made the trade. Lest there be any doubt this was more than a hypothetical to Harris: when Kara Swisher likened it to “one of those dinner party questions” in another interview, he retorted, “But it’s real.” This indicates a dangerous and extreme level of surety, one ignorant of history.

It is this kind of utilitarian anti-modernity mindset that has caused unconscionable harm over the past half-century. From the catastrophic obstruction of nuclear power development and the resulting increase in carbon emissions, to the overpopulation panic and subsequent forced sterilizations in China, to the scientific populist crusade against GMOs and the millions of developing-world children who lost their sight and lives as a result—the dangers of over-precaution and scientific populism are undeniable.

But Harris and his compatriots in panic are blinded by ideology, so preoccupied with whether or not we should accelerate technological development, they haven’t stopped to think of the consequences if we don’t.
