
AI Won’t Just Replace TV and Movie Writers—It Will Make Pop Culture a Nightmare


We rely on TV and movies to be a healthy distraction from day-to-day life. Handing the reins to AI is a mistake on so many levels.

Photo illustration of a movie projector reel projecting binary code. (The Daily Beast/Getty Images)

“I’ve asked an AI to generate a trailer for a HEIDI movie and now I can never sleep again,” filmmaker and comic Patrick Karpiczenko posted to Twitter on July 10. The trailer starts innocently enough, with a young blonde girl dancing through Alpine mountains as she does in the Swiss children’s stories—even if her face looks like it’s melting. Cows and horses show up, and a man with a distinctly goat-like face nuzzles one of the horses just before the title displays: HRCLIC. (It’s actually hard to tell if these are even letters.)

Laughing children and oddly shaped animals come on screen, but then give way to townspeople with torches, shouting and screaming in the dark of night, with the “Heidi” figure yelling in horror, a bucolic scene turned into a Nazi rally replete with tiki torches. We blur-transition to dogs (?) and goats watching these proceedings in a modern movie theater, and suddenly: THE THE Eu UED. (You have to watch it to see what I mean.)

I slept fine after watching this, but it’s not exactly The Real Housewives of Orange County. When you get home from work, you’re not necessarily looking for the collective unconscious to wash over you to help you relax. Using machines to generate lowbrow culture is making things… uncanny.


And yet that’s the future many fear is set to be foisted on us with the rise of AI. Earlier this month, 160,000 actors went on strike, joining the more than 11,000 writers who have been picketing Hollywood studios since early May. Fran Drescher, the president of the SAG-AFTRA union (though perhaps better known to millennials as “the Nanny”), has attacked studio executives head-on and argued that AI, among other things, is threatening culture workers. The worry is real. ChatGPT can imitate any genre, and tools like Gen-2 (which Karpiczenko used to make the Heidi trailer) already offer off-the-shelf storyboarding.

But the present state of AI culture generation isn’t exactly ready to automate the whole film production process (or even to render the titles given in its prompts correctly). What is called “generative AI” is a set of statistical prediction machines that jumble up pixels, words, and other data, remix them, and spit out something meant to resemble whatever you ask for. OpenAI, which launched ChatGPT last year, also offers DALL-E 2, one of the most popular image generators. You can tell this system in plain English to make a poster for the next Spider-Man movie (I’ve lost count), and it’ll do it in seconds. Algorithms of this kind are called “text-conditional generative diffusion models,” because they take prompts in natural language and use some fancy statistical techniques to generate images that correspond to something we want—or very much don’t want, as is often the case.
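The “diffusion” in that name can be sketched in a few lines of code. To be clear, the toy below is not how DALL-E 2 or any real system works (those learn a neural denoiser from enormous image datasets); here a hard-coded “denoiser” simply pulls random noise toward a made-up target, standing in for whatever region of the data the prompt statistically points at. All numbers and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoise(x, target, steps=50):
    """Iteratively nudge noisy samples x toward `target`, shrinking the
    injected noise at each step. This is a crude stand-in for the learned
    reverse-diffusion process in real text-to-image models."""
    for t in range(steps, 0, -1):
        noise_scale = t / steps            # noise fades as t -> 0
        x = x + 0.1 * (target - x)         # "denoising" drift toward the data
        x = x + noise_scale * 0.05 * rng.standard_normal(x.shape)
    return x

# Pretend these three numbers are "the data" a prompt points at.
target = np.array([1.0, -2.0, 0.5])
x = rng.standard_normal(3)                 # start from pure noise
out = toy_denoise(x, target)
print(out)                                 # lands near target, plus residual noise
```

The key structural point survives even in this caricature: the model never retrieves a stored image, it only walks noise toward a statistically plausible neighborhood, which is why the outputs feel “near” the prompt without ever being quite right.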

Generative AI isn’t yet making movies, exactly. What it excels at is a form of lowbrow culture that’s genuinely weird, completely horrifying, and actually new. There’s a lot of talk of “innovation” in Silicon Valley, a term that usually means destroying things that work in favor of things that don’t. But generative AI systems really are tapping into something new.

What these systems produce is statistically “near” the stuff we say, or the things we tell them to do, or the images we feed them. This is why they are also pretty racist, reproducing and mashing up our biases. AI watchdogs Emily Bender (a computational linguist) and Timnit Gebru (an AI engineer) have coined the phrase “stochastic parrots” to capture this phenomenon.

But this remixing can also be put to creative ends. Generative AI uses statistics to search “near” what we mean by “Spider-Man” or “Heidi,” and it fetches images and text from that “nearby” region that end up feeling totally alienating. The artist and critic Hito Steyerl has called these new images “mean images,” a play on the double meaning of that word as “average” and “cruel.” Mean genre videos, though, are just plain creepy.

AI Heidi isn’t alone. We’re entering a world where statistical methods play a direct role in the production of culture. Generative AI is already giving us glimpses of a partly automated entertainment industry, and it’s weird. Video “art” like the “pizza commercial,” the beer commercial where everything blows up, or even the much-circulated nightmare fuel of “Will Smith eating spaghetti” seems like parody of known genres: the commercial, the trailer, and so on. But trailers should make us want to watch movies, and advertisements should attract us to products.

More generally, lowbrow culture should be comforting: it should get us back to a “normal” reality, give us a sense that we are tethered to that reality. If art helps us transcend that reality—this is why we use the “high/low” spectrum in thinking about it—then AI culture is something else entirely. Rather than bathing us in a comforting stupidity, like the “boob tube” was always accused of doing for us ’90s kids, it seems to tap directly into the most violent and perverse aspects of our collective unconscious. The infinitely running “AI Seinfeld” was shut down—and later turned on again—when it became transphobic, echoing the fate of Microsoft’s chatbot Tay, which learned, way back in 2016, to deny the Holocaust in mere hours. These aren’t the domesticated forms of violence we enjoy, like true crime podcasts or Law & Order: Special Victims Unit. Generative AI is showing us something far more disturbing: our collective statistical unconscious.

Will Smith Eating Spaghetti gives us a feeling, a vibe, that’s just… off. Sigmund Freud, the founder of psychoanalysis, described this vibe in a 1919 essay as “uncanny.” Uncanny things, he said, return from the darkest corners of our psyche, from places we’ve repressed. Our most violent, weirdest fears get expressed there. And we might even like it sometimes, as in Gothic horror or r/liminalspaces. But rather than severed limbs or the living dead, or even Romantic visions of automata, the uncanny vibe has shifted into an even weirder statistical mode.

What washes over us in the new genre of generative video art isn’t a repressed memory or belief, and it’s not horror exactly. Instead, it’s a loss of focus. Media scholar Roland Meyer argues that AI systems “turn clichés into nightmares.” The uncanny gets “diffused” onto us by these algorithms, giving us not a personal or even collective sense of our psyches, but a picture of real social patterns that should keep us up at night, after all.

Philosopher Slavoj Žižek once explained that The Sound of Music is racist. The movie itself is, of course, avowedly anti-fascist, telling the story of the von Trapp family’s escape from Nazi-annexed Austria. But Žižek points out that the “Nazis” are elegant and cosmopolitan—matching common Nazi stereotypes of Jewish people—while the family is largely blond and wholesome, with a “small-is-beautiful” ethos that suggests the values of National Socialism more than the fictional Nazis themselves. Maybe this is why the movie has remained so popular, Žižek suggests in a Freudian twist: it tells us what we want to hear (we are good democratic citizens) while speaking subliminally to our crypto-desire for fascism.

Most “AI art” isn’t fully automated, and it doesn’t stand to be anytime soon. (This won’t help the strikers, though: they are fighting a real fight about a massive loss of jobs and wages that doesn’t depend on “full automation,” which is largely a fantasy anyway.) So when asked what he fed the algorithm to get it to spit out the deranged Heidi trailer, Karpiczenko responded in German: “our homeland + my youth = surreal fever-dream.”

It’s unclear what the exact prompt was, but that also isn’t the point. He and the others making manual contributions to this genre are, if anything, amplifying the weird statistical effect. The result is a “diffusion” of pieces of our collective cultural knowledge regurgitated back at us by the algorithm. These are a bit like mashups, but with noise added, creating a statistical monster.

The new statistical uncanny isn’t just horror, although it’s clearly a related genre. The fascist turn that the Heidi trailer takes isn’t just a “secret desire” for violence. It’s more that even the parts of the world we throw into the “bad” basket—cruelty, crime, abuse of power, fascism—lose definition, become “diffuse.” Of course, we’re fascinated by this too, and maybe eventually we’ll find a way to be comforted by it, to depend on it for our sense of reality, for relaxation after work. But for now, the statistical unconscious will trouble our sleep.