On its face, it’s not that strange that James Vlahos can talk to his father, John, any time he wants to on the phone. He can ask him about his favorite sports team when he’s curious, or what his favorite song is. When the feeling strikes, Vlahos can just shoot his dad a message to see how he’s doing. Whatever it is, his dad is always quick to respond, sometimes with a straightforward answer and sometimes with a joke.
But here’s the thing: Vlahos’s dad is dead. He passed away in February 2017 from stage-4 lung cancer. The person Vlahos can chat with today on his phone isn’t his father, exactly. It’s an AI chatbot that father and son created after the family learned of John’s terminal diagnosis.
“We knew we were going to lose him and were scrambling to find ways to remember him,” Vlahos told The Daily Beast. “Meanwhile, I was working on a book about conversational AI, so I was learning about all of these ways that we can teach computers to talk in human-like ways.”
Vlahos came to the idea rather suddenly. Instead of simply recording John’s memories and stories on audio or video, he realized he could build a more interactive way to revisit his father’s life and personality through the same AI technology he was already exploring. “That was what gave me this idea that I could make this memory-sharing chatbot that I came to call Dadbot,” Vlahos said.
He went on to pen a story about Dadbot for Wired in 2017. Then word started to spread. Emails and calls came in from all over the world, from people who were dying or had dying loved ones and wanted to create similar chatbots of their own. Would you make a Mombot for me? Would you make a Dadbot for me? This became the inspiration for HereAfter AI, a web app created by Vlahos that lets you “preserve meaningful memories about your life and interactively share them with the people you love,” according to its website.
HereAfter works much like Dadbot did: Users record audio memories of their lives in response to question prompts like, “What was your favorite band when you were younger?” Once collected, the memories are uploaded into the software, which creates a conversational AI chatbot of the user, a kind of virtual doppelganger that can answer questions and tell stories about the user’s life through text, audio, and even pictures after they’ve died.
It’s not exactly like having a conversation with them, and it’s not supposed to be. The chatbot lets you access the memories and stories of your dead loved ones. You can’t, say, ask it what it thinks of a movie that just came out, or have it make up a new bedtime story for you at night. But it can talk to you about the things the person recorded while they were alive.
HereAfter AI is just one business in a burgeoning industry of grieving technology that promises to digitally extend the lives of our dead loved ones, for a fee. StoryFile, a similar AI-driven startup, takes it a step further, letting you talk to a video of someone who has passed away. The company made headlines over the summer when Marina Smith, an 87-year-old grandmother and Holocaust educator, was able to “talk to guests at her funeral” through AI.
People could ask her questions and she would answer them as if she were there, talking in her normal cadence and recalling memories about her life as any grandparent might. The only difference was that, as much as she was there on screen talking lucidly about her past, she had also just been cremated, and her remains sat in an urn.
What was actually doing the talking were pre-recorded videos of Smith. The AI was simply programmed to respond to questions with the right clip. The effect was nothing less than spine-tingling.
“Her digital version shocked mourners,” Stephen Smith, Marina’s son and the founder of StoryFile, told The Daily Telegraph. It’s an unsurprising reaction. A beloved grandmother and educator who had helped record the history of the Holocaust and teach people about it had seemingly emerged from beyond the veil, if only for a moment, to speak to her loved ones after her death. It’d be a shock to anyone, let alone loved ones in the throes of mourning.
While StoryFile and HereAfter AI are relatively new players in the growing business of grief tech, the concept of using AI to try to “bring back” dead loved ones isn’t new, and there are plenty of other examples from the past few years. In 2020, a man grieving the death of his girlfriend eight years earlier created a chatbot of her using old text messages the two had exchanged. In 2018, a man whose father had passed away created a dadbot of his own so his children could know what their grandfather was like. In 2017, a woman created a simulation of her friend who had been killed when he was hit by a car. It’s even been the plot of an episode of Black Mirror, “Be Right Back,” in which a woman communicates with her dead fiancé via a chatbot and, eventually, an android.
“It’s a very utilitarian way of thinking about other human beings to say that ‘I’m going to have a chatbot of my grandma so I can talk to her every day and have a relationship with her.’ But, you can’t,” Irina Raicu, the director of internet ethics at Santa Clara University, told The Daily Beast. “You’ve made essentially a deep fake of your loved one.”
Framing this experience as a “conversation” with a chatbot is a problem for Raicu. You can’t really have a conversation with a bot, after all. It’s not a person. It’s lines of code. Saying otherwise adds fuel to the fire of profound misunderstandings that society at large has about AI. It’s this distinction that Raicu and others fear might get lost once a person starts talking with a chatbot of the deceased. The problem gnaws at society more and more as chatbots grow increasingly sophisticated, blurring the line between reality and simulation drastically enough that some are even questioning whether these bots are sentient.
And when you tack on the trauma of loss—a moment in a person's life when they are most raw and vulnerable—these interactions could be a recipe for disaster.
“To me, it’s absolutely ridiculous,” Alan Wolfelt, a grief counselor and director of the Center for Loss and Life Transition, told The Daily Beast. “We already live in a mourning-avoidant culture. These technologies would be something I would add as a contributor to complicating people’s ability to mourn.”
The problem, in Wolfelt’s view, is that the technology could disrupt the most crucial part of grief and mourning: acknowledging the death. Instead of confronting the fact that your loved one is dead and gone forever, you might use an AI version of them to escape that basic reality. This can hinder an otherwise healthy grieving and mourning process at best and feed a dangerous delusion at worst.
Not all grief experts, however, agree with Wolfelt. Natalia Skritskaya, a research scientist at the Center for Prolonged Grief at Columbia University, works primarily with people experiencing chronic mourning. This occurs when the death of a loved one hits a person so hard their life becomes “like Groundhog Day,” she told The Daily Beast.
Skritskaya does concede there are risks. Giving a person who is in the throes of prolonged grief an AI chatbot of their loved one, she said, could be like handing an alcoholic a bottle of whiskey.
“What I see with bereaved people, especially those with prolonged grief, is that they tend to compartmentalize and try to ignore the fact that the person died or ignore the parts of reality that remind them of their absence,” Skritskaya said. “When they use the chatbot, they could be tricking themselves into thinking the person is still here. Some minds are even better than others at creating these alternative realities, and I think technology helps us do that more and more.”
But she stressed that a lot of this depends on the individual. No two grieving processes are alike after all. While one person might view something like an AI chatbot of their deceased loved one as an opportunity to remember them and connect with memories, another might use it as a way to easily escape reality.
“If you try to keep somebody alive who’s dead, you become dead to yourself and people around you,” Wolfelt explained. “You’ve got to let somebody be dead so you can live.”
If you wanted to, you could ask Ruth Bader Ginsburg a question right now.
Despite having passed away in 2020, the Supreme Court Justice lives on, in a way, through an AI chatbot developed by AI21 Labs dubbed Ask Ruth Bader Ginsburg, which was trained on a dataset of nearly three decades of her legal writing. While the two are ostensibly similar, there’s a major difference between HereAfter AI and the RBG-bot: the late Supreme Court Justice didn’t know about it before she died.
“Ruth Bader Ginsburg didn’t consent and yet, there’s an AI of her,” Raicu said. “I think consent is a huge issue. And I think because it’s such a new thing, most of the people we’re now seeing turned into AI did not consent.”
The issue of consent is just one of the many prickly ethical questions at the underbelly of the chatbot tech trend. When these bots are infused with the memories and stories of actual people, a veritable Gordian knot of problems pops up.
For one, while the AI chatbots offered by HereAfter AI and StoryFile are largely limited to scripted conversations (they can’t answer a question they don’t already have an answer to), they open the door to generative answers like those of the RBG-bot or the recent simulated conversation between an AI Steve Jobs and an AI Joe Rogan.
It’s not difficult to imagine a future where these chatbots are weaponized by bad actors to make the dead say things they never would have said in real life. Raicu believes this could be especially problematic when it comes to issues like Holocaust denial and other forms of genocide denial.
Vlahos said there are currently no plans for this type of generative chatbot experience. Like Raicu, he doesn’t want to see people “put words in other people’s mouths.”

“We don’t want an AI to be inventing memories. We don’t want them inventing things that your relative has potentially said,” he said. Vlahos added that while the tech has advanced, it’s still imperfect, and could result in embarrassing or harmful moments where the AI says something the person never would have said while they were alive.
Still, Vlahos said that the goal for HereAfter AI is to eventually answer a much wider range of questions than it is currently capable of. He wants answers to be able to get a bit more specific and refined, “so it’s not just ‘tell me a story about your teenage years,’ but like, ‘Tell me a story about something embarrassing that happened to you in 10th grade.’”
That’s about improving the natural language capabilities of the AI “so our avatars really get what people want to talk about,” he said.
Because at the end of the day, what HereAfter AI and its competitors are doing is selling a product. They want to offer the best experience they can to help people cope with some of the worst moments of their lives. What these businesses are doing for grieving and loss is what Uber did for taxis and buses: disrupting, innovating, and making things a whole lot more complicated.
“It’s the commodification of the dead,” Alexis Elder, an AI ethicist at the University of Minnesota-Duluth, told The Daily Beast. She believes there’s an issue with the idea that you can package and sell a person’s memories and personality like this: it “encourages us to see people as equivalent to their outputs.”
She explained that people might get the sense that, sure, grandpa died, but at least you get to keep chatting with his personality, which is what really matters. In reality, that’s not the case. A person is so much more than the sum of their conversations, or their stories, or the legal advice they can give you from beyond the grave, Elder said. A person is real, and you can’t just package and sell them like a toy.
To the credit of their makers, that’s not the point of HereAfter AI or even Dadbot.
“It doesn’t make me miss him any less,” Vlahos explained. For him, it’s more about helping keep his memories of his father alive as time passes, a distinction that often gets lost when the topic is sensationalized and boiled down to “This is the AI version of your dead relative!”
“That’s the more prosaic, sci-fi way to look at it,” he said. “I think a more important way of looking at it is that it's in the continuum of memory technologies: writing, photography, audio recordings. It's using AI to better preserve and share memories.”
Anyone who has lost a loved one can tell you that pain, grief, and mourning never leave you. There are easy moments and there are hard ones. There are times when there’s nothing you wouldn’t give to hear your grandmother’s voice again, or to get a dumb text from your husband, or to joke around with your best friend like you used to. And for so many people, that’s impossible. The dead are dead. The living have to deal with it, but that doesn’t make it any easier.
Whether it’s an AI chatbot of your loved one or a simulated podcast with Steve Jobs, the trend of infusing technology with grief speaks to a very human sentiment. In the times when we’re most lost, when there’s nothing we want more than a kind word, advice, or a joke from someone we love, we’re willing to do anything to bring them back, even if that means bringing the dead back to life.
This story is part of a series on the innovation of death: how research and technology are changing the way we put the deceased to rest, how we grieve, and how we perceive death moving forward.