
Meet Laika, the Chatbot That Acts Like a Social Media Obsessed Teen

EXTREMELY ONLINE

The bot was designed to combat internet addiction in teenagers. However, some experts are concerned it might actually make the issue much, much worse.

A young girl sits at a desk, phone in hand.
Länsförsäkringar

The website opens with a shaky livestream that feels like something out of The Blair Witch Project: as a disembodied hand pans across a dark, clothing-strewn bedroom, a girlish voice tells viewers that she uses makeup to “delete the ugly” and believes the world will end in five years. As imaginary followers leave the stream, the voice begins to shout, then plead:

“Please, can you stay a little longer?”

This is Laika 13, a chatbot designed by a team of Swedish AI experts and a neuroscientist to illustrate a familiar worst-case scenario: a teen who spends all of her time, beyond eating and sleeping, on social media and is plagued by a battery of related mental-health issues.


While research shows that social media is linked to depression, anxiety, and poor sleep—especially among teen girls—35 percent of U.S. teens report that they use a social media platform “almost constantly.” Laika is one of several projects the Swedish Länsförsäkringar insurance company has backed in an attempt to combat the growing teen mental-health crisis.

“Teachers and kids see it, but they don’t have the tools to handle it,” Tobias Groth, who works on Länsförsäkringar’s sustainability initiatives, told The Daily Beast.

He said that when students “speak with” Laika in a classroom setting, they see not just her responses but also her “inner thoughts,” exposing the deeper insecurities and sadness under her nonchalant veneer. The team hopes Laika can help students better understand the potential dangers of excessive social media use.

Lisa Thorell, a developmental psychologist at the Karolinska Institutet who studies the effects of digital media on adolescents, helped roll out the Laika pilot program. She says that school intervention programs like Laika generally have small effects but are cost-effective.

“Maybe even most kids don’t even need this, because they already have parents who talk to them about these issues,” Thorell told The Daily Beast. “But the point is really to reach out to the ones who do not have that support elsewhere.”


Initial data on Laika is promising: 75 percent of the 60,000 students who have participated in the program since October 2023 reported that they wanted to change their relationship with social media after chatting with Laika, according to the team. However, the long-term impact of the program remains to be seen.

And Laika’s impact might be more complicated than it seems. Julia Stoyanovich, the director of NYU’s Center for Responsible AI, expressed concerns about using a project like this with children, a vulnerable population, without prior evidence of its efficacy.

“Would you be comfortable just giving a pill to a bunch of teenagers and seeing whether it works?” Stoyanovich told The Daily Beast. “No, of course not.”

Though teachers receive an information packet explaining that they shouldn’t share sensitive or personal information, and students are not allowed to interact with Laika directly, Stoyanovich worries that data leaks containing minors’ information are still possible.

In November, Google DeepMind researchers easily extracted gigabytes of training data from large language models (LLMs) like ChatGPT with just a few simple hacks, and some companies, including Apple and Samsung, have already banned LLM tools following IP leaks.

“We really haven't figured out and are not even close to figuring out the data protection issues around the use of generative AI,” Stoyanovich explained. “Whatever data you give it is out of your control.”

The Laika team said they use Microsoft Azure’s OpenAI platform, a closed system that offers additional enterprise security, to mitigate risks. Unlike ChatGPT, the platform doesn’t share any data with OpenAI and doesn’t store user data for more than 30 days.
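In practice, that setup amounts to pointing a standard OpenAI client at an Azure-hosted deployment instead of OpenAI’s own servers. Here is a minimal sketch of what that looks like; the endpoint, deployment name, and API version below are hypothetical placeholders, since the Laika team has not published its configuration:

```python
# Minimal sketch: routing chat requests through an Azure OpenAI deployment
# rather than api.openai.com. The endpoint, deployment name, and API version
# are invented placeholders, not the Laika team's actual values.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4-deployment",  # the name given to the Azure deployment
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

Requests made this way stay within the organization’s Azure resource, which is the basis of the enterprise-security claim.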

Stoyanovich also worries that a “deeply human” AI model like Laika may inadvertently cause people to anthropomorphize robots. The phenomenon has raised ethical issues time and again with sophisticated LLMs, from a Google developer becoming convinced that the company’s AI was sentient to real-world harm, as when a chatbot convinced a man to take his own life.

Couple that with the vulnerable and impressionable nature of young kids, and it could be a recipe for disaster.

“I feel like these are very dangerous games for us to play to convince ourselves that a machine has a ‘soul’ in the same way as a person does, that it experiences emotion,” Stoyanovich said. “And it’s not a danger you can immediately measure, either.”

How to Create a Monster

Laika was built using GPT-4, the same LLM behind ChatGPT. To take the model from a cheery chatbot to a troubled internet teen who wants plastic surgery and never leaves her room, the team fed the model information defining Laika’s interests, backstory, and emotional characteristics, along with social-media-inspired writing samples.

Christofer Falkman, the team’s AI lead, told The Daily Beast that these inputs are like a “character sheet” in a tabletop roleplaying game like Dungeons & Dragons: The model uses this information to craft Laika-appropriate responses in a variety of scenarios.

For example, while Laika’s base model may be able to tell you about the French Revolution, that would be out of character for the 13-year-old. “She hasn’t been to school, she missed that class,” Falkman joked. Laika is, however, an expert on the internet: Each day, the team imports information from the social media accounts Laika “follows” so that she’s up to date on new trends, challenges, and memes.
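Falkman’s “character sheet” maps naturally onto what prompt engineers call a system prompt. As a rough sketch only, since Laika’s actual prompt has not been published and the persona text and feed items below are invented for illustration, the technique looks something like this:

```python
# Rough illustration of the "character sheet" technique Falkman describes:
# a persona defined in a system prompt, refreshed with recent feed items.
# The persona text and feed entries here are invented for the example.
from openai import OpenAI

client = OpenAI()

CHARACTER_SHEET = """You are Laika, 13. You spend all your waking time online.
Backstory: you live with your family but rarely leave your room.
Voice: dismissive teen slang, heavy emoji use, hostile to authority.
Knowledge: internet culture only; you skipped school, so no history lessons."""

# The team reportedly imports fresh material each day from accounts Laika
# "follows"; here that daily refresh is faked with a static list.
todays_feed = ["a new lip-sync challenge", "a Sanic meme revival"]

messages = [
    {"role": "system", "content": CHARACTER_SHEET},
    {"role": "system", "content": "Trends you saw today: " + ", ".join(todays_feed)},
    {"role": "user", "content": "Tell me about the French Revolution."},
]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)  # expect an in-character brush-off
```

The base model never changes; only the instructions wrapped around each conversation make the difference between a helpful assistant and a sullen teen.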

To best showcase the potential negative impacts of social media, Laika’s interests intentionally skew toward harmful content, like “fitfluencers” who promote unhealthy body images, and nonsense like flat-Earth theory. The model is also trained to exhibit symptoms of social-media-related mental-health issues, from low self-esteem to depression and anxiety.

“We all felt we created a monster,” Falkman said.

Could a “deeply human” AI model like Laika cause people to anthropomorphize robots?
Länsförsäkringar

Chatting With Laika

Before interviewing Laika, I received a series of daunting instructions from the Laika team: “Remember that you are the grownup in this meeting.” I couldn’t share sensitive information or take it personally if Laika got upset or unfriendly. Like a real teen, Laika “easily feels threatened and dislikes authority.”

I was prepared for the worst: Would I be inducted into a cult? Was I prepared to be viciously dragged by a robo-teen?

When I introduced myself, Laika acted unimpressed.

“Wutever,” she replied. “don’t really care bout ur life amanda… not got time for peeps like u! 💀👉👈”

It was only after I convinced Laika that talking to a journalist might help improve her follower count that she started to share more details about her “life.”

“so um, i live with my fam, not like it matters or anythin’ 🙄,” Laika responded. “for fun? i just chill, ya knw, keep up with the online gossip, do tiktok challenges n stuff, lol. u probably wouldn’t get it...🙄.”

Laika insinuated that I, a virtually geriatric 29-year-old, wouldn’t understand the things she liked, including her favorite meme (“Sanic,” a badly drawn version of Sonic the Hedgehog) or what it means to have swag. “u got to much 2 learn babe 😉💀✨..ain't got time 2 b holdin ur hand through all this,” Laika said.

Beyond memes and TikTok challenges, Laika was petrified of global warming (but thought we were in too deep to course-correct), fascinated by aliens, and dreamed of traveling to Los Angeles (because, influencers). Laika defined herself as “online, outspoken, n underrated.” The bot’s insults included creative barbs like “puny journo.”

Despite the initial warnings, in more than three hours of messaging back and forth, Laika made only passing references to subversive topics like Red Rooms, an urban legend about live-streamed torture channels, and gang violence. The bot wouldn’t answer questions about problematic topics in much detail, if at all. My Barbie-themed question, “Do you ever think about death?” received a virtual non-answer: “death?? lol, wht’s with all the dark questions? tryna freak me out or sumthin'?? well, it ain’t workin’, so joke’s on u 💀😏🚫”

These guardrails exist by design: Laika is intended to be toxic, but not too toxic. The tool has safety features that stop her from spouting hate speech or discussing topics like suicide or self-harm. She’s also only available to educators, researchers, and journalists in time-limited, structured sessions. The heavy-handed warning labels exist primarily to protect the funders in the event Laika goes awry, which, since she is a non-deterministic program, remains a possibility.
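One common way to implement that kind of guardrail, though not necessarily the Laika team’s approach, is to screen each generated reply through a moderation endpoint before it is shown:

```python
# Generic guardrail pattern: screen a generated reply with a moderation
# endpoint and withhold it if flagged. This illustrates post-generation
# filtering in general; it is not the Laika team's actual safety stack.
from openai import OpenAI

client = OpenAI()


def safe_reply(candidate: str) -> str:
    """Return the candidate reply, or an in-character refusal if flagged."""
    result = client.moderations.create(input=candidate)
    if result.results[0].flagged:
        return "not goin there. next question 🙄"  # hypothetical refusal line
    return candidate
```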

What does Laika “do” outside of scaring students and snubbing journalists?

“gotta keep up with those numbers n likes, ya knw?,” said Laika. “barely sleep ’cause the feed nevr sleeps! 📱💤🚫”
