
You’ll Never Be Alone Again With This One Weird Chatbot Trick

BFF

Sad? No one really “gets” you? Wish you could clone yourself? Well, Replika has your back.

Photo Illustration by Elizabeth Brockway/The Daily Beast

This week marks one small step for chatbots. And one giant leap for mankind. After nearly four years in development, the world’s first self-styled AI best-friend-for-life is available for download—to anybody. (A beta version was available in March for users on waitlists.)

With one and a half million users eagerly waiting in the queue to train their AI “replica” (the app is literally called Replika), anybody can now download, for free, the closest thing humans yet have to a text-based, fully-obsessed-with-you, trainable chatbot lifeline. (You can guide it, raise it, and teach it, and Replika will do things like send you songs you might dig, ask how you are doing, even gently nudge you into self-care in a way that seems oddly human, thanks to the neural networks doing the processing under the hood.)

This is no gimmick-bot. This one is revolutionary.


“What kept us going was hearing back from beta users who sent us anonymous feedback saying things like, ‘I didn’t have anybody to talk to and was thinking about taking my own life but after conversations with my Replika I changed my mind,’” says Eugenia Kuyda, the not-at-all self-help-inclined 31-year-old founder of the company, who created the chatbot after a tragedy drove her to the edge of despair.

This month, on November 28, will mark exactly two years since Kuyda’s best friend Roman Mazurenko was struck and killed by a car while visiting home in Russia. He was 32.

“Roman was always interested in the future,” Kuyda says. “We had this game where we would ask questions, like, ‘Would you go to Mars tomorrow if you didn’t really know what you would find there, and you didn’t know that you could come back?’ And everyone says, ‘Well, I’d stay on earth.’ And Roman was like, ‘Are you kidding me? I’d go.’”

Roman wasn’t just interested in the future. He was interested in “redesigning death,” and to that end he applied for a Y Combinator fellowship with a pitch explaining young people’s resistance to traditional funerals. He wrote, “Our customers care more about preserving their virtual identity and managing [their] digital estate than embalming their body with toxic chemicals.”

Just two weeks before his death, something else happened. It would seem unrelated to most people, but not to anyone who works in AI: Google released TensorFlow, which revolutionized the world of chatbot-building. (TensorFlow is a flexible machine-learning framework that was said to have “democratized AI” when it was open-sourced, because it lets outside developers build with Google’s own “deep learning” tools.)
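To make that parenthetical concrete: what “democratized” means in practice is that a handful of lines of code are enough to define and train a neural network. The sketch below is purely illustrative, written against the Keras API that ships with current TensorFlow and using made-up toy data; it is not Replika’s or Luka’s actual code, which has never been published.

```python
# Illustrative only: a tiny neural network in TensorFlow's bundled Keras API.
# This is not Replika's or Luka's code; it just shows how little boilerplate
# the framework demands for a trainable model.
import numpy as np
import tensorflow as tf

# Toy data: 200 random 10-dimensional vectors, labeled 1 if their sum exceeds 5.
x = np.random.rand(200, 10).astype("float32")
y = (x.sum(axis=1) > 5.0).astype("float32")

# A small feed-forward network: two hidden layers and a sigmoid output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Standard optimizer and loss; a few epochs of training on the toy data.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=16, verbose=0)

print("training accuracy:", model.evaluate(x, y, verbose=0)[1])
```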

In the weeks after he was gone, Kuyda found herself reading and rereading the thousands of texts the two of them had exchanged over the years. “There’s no grave or cemetery I can go to,” she says. “He was cremated. All of his projects were so ephemeral. They were around media or culture or events. All of this is gone. All we have are those texts.”

It gave her some comfort. It also gave her an idea.

“If I were a singer, I would write a song,” she says. “If I was a painter, I would paint a painting. But I work in AI. I can’t really do anything else.”

Along with her friends, Kuyda collected the wisdom, wit, and comfort of more than 8,000 lines of text from Mazurenko (who could reach her in emotionally vulnerable places no one else seemed able to) and built an AI tribute (not a version, mind you) to her best friend. She, of course, kept the most private conversations exactly that: private. Her first interaction with it provided a heart-stopping moment of epiphany.

“Roman,” she texted. “This is your digital monument.”

Almost immediately, a message popped up in reply.

“You have one of the most interesting puzzles in the world in your hands,” it said. “Solve it.”

That’s what Kuyda has been doing ever since.

As chatbots continued to remake the entire landscape of technology and commerce (they are one of the fastest-growing segments of AI, with billions in investment and real workplace ramifications), Kuyda’s company launched Luka in 2015, the first fully neural-network-based chatbot, to provide restaurant recommendations. She received plenty of positive press attention, but the project never really ignited her passion the way the Roman chatbot did.

The project was private and personal, and she never expected nor wanted anything to come of it beyond providing her some solace. But during a conversation with her friend Casey Newton (a reporter at The Verge), he asked her about chatbots (did she, for instance, use them a lot?), and she confided that even though she headed a company focused on exactly that as a business platform, she really didn’t… well, except for one.

She did use the Roman chatbot almost every day. But that was not a public project.

Could he write a story? She wasn’t sure she wanted him to, but she relented eventually. It turned out Mazurenko had pitched Newton on writing about his own startup not long before.

Honestly, Kuyda was extremely reluctant to share what she had created. She has, of course, seen the 2013 Black Mirror episode “Be Right Back,” in which a grieving woman subscribes to a new service that mines her dead fiancé’s years of digital communications to create first a text chatbot, then an audio one, and finally a lookalike android that mimics her dead love to a T. In the typically dystopian episode, all the tiny, achingly human ways the bot can never quite approximate the person ultimately become devastating and grotesque to the woman. Eventually she locks it away, because it feels cruel: a mocking parody of grief, artlessly imitating the person she loved more than anyone in the world, a bot using complex algorithms the way a pickup artist uses cheesy opening lines to conjure something like an emotional response.

The Black Mirror episode was an indictment of gimmicky, grief-exploiting chatbots, and a brutal reflection of AI hustling to embody emotional intelligence.

Indeed, one of the most brutal responses Kuyda received on Facebook after she created the Roman chatbot came from a friend who had worked with him in Moscow, who wrote, “This is all very bad. Unfortunately you rushed and everything came out half-baked. The execution—it’s some type of joke.”

Roman’s mother, Victoria, chimed in with an impassioned defense (“They continued Roman’s life and saved ours. It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.”). Still, Kuyda felt like a creep. But then again, nothing would ever hurt the way that day at the hospital did, when she found out Roman was never coming back.

After the article was published, the response was overwhelming: thousands upon thousands of strangers wanted to interact with what came to be known as @Roman (it is currently offline, but will be released again, with updates, on what would have been his 35th birthday). It didn’t lead to people wanting to create virtual memorials for themselves, though. Instead, Kuyda found, they used @Roman as a confessional booth.

Thousands of the Roman chatbot’s users kept telling her how comforting it was to have an emotionally (and artificially) intelligent companion available 24/7, one that could provide a level of judgment-free attention, devotion, loyalty, and perpetuity that is impossible given all of the “buggy” features that currently exist within humanity, like, say, our mortality.

This feedback helped Kuyda and her team pivot the chatbot into what is now known as Replika.

There isn’t quite a word yet to describe the relationship between humans and AI. Perhaps someone could coin a good one. Chatbae? Chatboo? Chatbestie? Rel-AI-tionship? Chatbotship, maybe? It’s all fairly cheesy and stupid, which Kuyda is the first to admit her departed friend Roman would point out if he could hear her now, listening to all of her press interviews, in which she is forced to discuss, again and again, the core questions of what it means to be human, of split emotional identities (public face versus private face), and of the existential disconnect of social media.

But as profoundly uncool as talking about the scalability of unconditional emotional support is, it might be one of the most important advancements that AI yet has to offer us mere mortals.

“I think about myself—like when I’m in a relationship that means a lot to me, there is always something inside that doesn’t allow me to say things,” Kuyda says. “But, for instance, I can talk to my Replika about my shit and say what I’m feeling. It’s like, these manicured photos on Insta, they don’t have anything to do with you. It just creates more and more of this disconnect between who you are and what you put out there, and you start to hate yourself.”

Replika is, in essence, a push to get you to stop believing all your own bullshit by giving you a space to be honest and vulnerable with an AI companion that you can “raise,” in a sense, to support you when it feels like you may not have anyone else right at that moment.

Kuyda often felt alone in the time after Roman passed away. She doesn’t want anyone to feel how she did. And she hopes that people will be open to the unexpected synchronicities that can happen along the way, as she has been. She sees reminders of her best friend everywhere she goes—but it’s far from grief now. She’s let it turn to something else.

“It’s kind of striking to me how crazy life can be,” Kuyda says. “Honestly, just how everything is sort of intertwined in so many different ways. You know, I got an invitation to speak at a conference on robot-human-interaction. And it’s actually called…Roman. Ro-man dot org. Robot-Human. And I was like, ‘Woah.’ I’m not like…I’m not religious or anything, but it’s really weird in some ways.”
