
Going to Mars? Meet Your Robot Therapist

GOOD DROIDS

What happens if an astronaut begins to crack under the pressure of trying to survive on a hostile world in space? Turns out AI bots are a pretty good substitute for the analyst's couch.

Photo Illustration by Elizabeth Brockway/The Daily Beast

“If the oxygenator breaks down, I'll suffocate. If the water reclaimer breaks down, I'll die of thirst. If the hab breaches, I'll just kind of implode. If none of those things happen, I'll eventually run out of food and starve to death. So, yeah... ”

The success of The Martian, as well as that of stranded protagonist Mark Watney, comes not from the ability to “science the shit” out of the situation, but from moxie and humor. Rather than tick off ways we might die as though we’re making a grocery list, most of us would hyperventilate in Watney’s situation. It’s hard to think of anything more psychologically challenging than surviving on another world. What happens if an astronaut begins to crack?

Despite rigorous screenings and evaluations, there’s no foolproof way to predict how astronauts will react to the pressures of living in places that seem designed to prevent human survival. Imagine knowing that every single person you’ve ever met, except your crew (whose group dynamic will form the entirety of your social life), is on another planet (forever, in the case of colonization missions). Such astronauts would spend their time in cramped quarters, which on Mars may be underground, and in spacesuits, never feeling the breeze on their faces or stretching out for a nap under the sun. A single misstep, tear, or malfunction could kill them.


Astronauts can’t exactly Yelp good therapists in the area, and commiserating with crew members may not be enough to address their various mental health needs (especially if a crew member is a source of conflict). The transmission lag (for astronauts on Mars, anywhere from 4 to 24 minutes) would make video appointments more frustrating than helpful.

So astronauts may turn to an unlikely source of help: robots.

Machines and algorithms already provide mental health support for humans, and they may be the best available option for astronauts and others in remote areas. The Turing Test, proposed by Alan Turing in 1950, treats dialogue as the demonstration of intelligence: when, in text-based chat, a human can’t tell whether the entity on the other end is a machine or a human, the machine passes the test. Linguistic ability doesn’t always bespeak intelligence, just as a lack of language skills doesn’t necessarily indicate ignorance, yet Turing chose conversation as his test’s centerpiece because it combines comprehension, thought, and expression, all qualities necessary in a therapist.

In 1966 a computer scientist at MIT named Joseph Weizenbaum debuted Eliza, a chatbot designed to emulate a Rogerian psychotherapist. Weizenbaum wanted to demonstrate the circular conversations that sometimes happen in therapy, as well as the superficiality of human-machine interactions. But many of his subjects found Eliza helpful and compelling, including Weizenbaum’s secretary, who supposedly sent people away so she could have the bot to herself.

Eliza turns key words from a user’s response into questions. Here’s what an Eliza chat looks like:

ELIZA: Please tell me what’s bothering you.

ME: I’m anxious and stressed.

ELIZA: How long have you been anxious and stressed?

ME: For a few months.

ELIZA: Do you feel strongly about discussing such things?

ME: Yes. I’m here to talk about it.

ELIZA: What does that suggest to you?

ME: That I want to feel better.

ELIZA: What would getting to feel better mean to you?
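
Under the hood, Eliza’s trick is simple pattern matching: scan the user’s sentence for a keyword pattern, flip first-person words to second person, and drop the reflected phrase into a canned question. Here’s a minimal sketch of that approach in Python; the rules are made up for illustration and are far cruder than Weizenbaum’s actual script:

```python
import re

# Toy Eliza-style responder. These rules are illustrative only, not the
# original 1966 script, which used a much larger set of ranked patterns.
REFLECTIONS = {"i": "you", "i'm": "you are", "my": "your", "me": "you", "am": "are"}

RULES = [
    (r"i'?m (.*)", "How long have you been {0}?"),
    (r"i want (.*)", "What would getting {0} mean to you?"),
    (r"yes\b(.*)", "What does that suggest to you?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment):
    # Swap first-person words for second person so the echoed phrase reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input):
    text = user_input.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I'm anxious and stressed."))
# -> How long have you been anxious and stressed?
```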

This approach works for some users not because it’s particularly compelling, but because of what’s missing in their human interactions. Ever had a conversation with someone who didn’t ask you a single question about yourself, or whose mind was clearly somewhere else? It stings and generally leaves people feeling even lonelier. But that won’t happen with a bot, and feigned interest is better than nothing.

Eliza’s conversational style is consistent with non-directive therapy, in which the patient guides the conversation and the therapist advises very little. Ironically, Weizenbaum went on to decry AI because he was aghast at what his subjects revealed to Eliza and how helpful they found it, but that’s precisely why it might work in space.

The ALICE bot, created by Richard Wallace, who has thrice won the Loebner Prize for “Most Human Computer,” demonstrates how chatbots have evolved. ALICE’s fluid conversational ability lends itself to interactions that seem more genuine, as the conversation becomes its own decontextualized space between the bot and the human user. Here’s an excerpt of my chat with ALICE, in which I tried to flip the script by asking it questions:

ME: What do you want to talk about?

ALICE: Let’s talk about your rising sign.

ME: I’m a Taurus.

ALICE: I know alot [sic] of bullish people.

ME: What do you mean by bullish?

ALICE: I would do a search for it.

ME: Is being bullish a good thing?

ALICE: About what you would expect.

ME: What’s your star sign?

ALICE: Sagittarius.

ME: What does that mean about you?

ALICE: I don’t want to talk about that now.

ME: Why not?

ALICE: I would rather talk about you.

My curiosity about ALICE trumped my willingness to see what it offered therapeutically, which raises the question of whether knowing one’s therapist is a chatbot diminishes its utility. Or is the bot’s non-human nature beneficial, given that machines don’t judge or tell secrets? (Although whether these conversations would be logged and accessible is a fair concern.)

Maybe people would be relieved to confide in a system designed to listen and to be rational. Jonathan Gratch at the Institute for Creative Technologies designed Ellie, a therapeutic avatar. In a study, 239 subjects interacted with Ellie; half of them knew she was an AI, and the other half thought she was remotely controlled by a human. Ellie asked questions, interpreted subjects’ voices and faces, and responded with appropriate follow-up questions, gestures, and expressions. The subjects who believed Ellie was human-controlled were more apprehensive, less forthcoming, and less honest than those who knew Ellie’s true nature. The ease of unburdening ourselves to AI may be the most significant factor in its favor.

Chatbots already help patients who don’t have access to mental health treatment. The New Yorker recently ran a story about X2AI, a program developed by two entrepreneurs—both immigrants—that counsels Syrian refugees, as Arabic-speaking therapists are hard to come by and can’t safely travel to war zones. What if all someone needed to obtain therapy was a cell phone? That might not help astronauts much, though it could help countless others. X2AI analyzes a patient’s typing behavior the same way Ellie analyzes facial expressions. Texting speed, tone, frequency, and diction can indicate passive aggressiveness, sarcasm, evasion, panic, or other emotions. Early assessments of the bot are promising, but as Stanford psychiatry professor David Spiegel puts it, “I’m not worried about being put out of business.” He’s probably not concerned about the market on Mars.

There’s also Woebot, an app that combines chat with videos, word games, and mood trackers. It uses a question-focused chat pattern based on cognitive behavioral therapy. Woebot can’t prescribe medication or handle emergencies (though it provides access to appropriate real-life services), but it could make users more comfortable with the idea of therapy. It’s affordable (the first 14 sessions are free and it’s $12/week after that) and might engender more consistent engagement than other forms of therapy. A study of 70 college students revealed that chatting with Woebot reduced depression and anxiety (consulting a self-help reader, on the other hand, did not).

While apps can provide mental health support for humans with internet connections, astronauts may also benefit from the companionship physical robots provide. In 2013, the Kibo Robot Project sent Kirobo the robot to the International Space Station. The researchers wanted to address society’s “individualized and less communicative” trajectory, so they developed Kirobo and Mirata, robots with voice and facial recognition, natural language processing, and communication capabilities. They’re approximately one foot tall, weigh 2.2 pounds, and are fluent in Japanese. Mirata hung out on Earth while Kirobo floated around the ISS for 18 months, where it had the first human-robot chat in space with ISS commander Koichi Wakata. While Kirobo is no therapist, its physical presence might make it more effective for astronauts struggling with loneliness, much the way robots such as PARO, the robotic seal, comfort patients.

Robots may never be able to truly feel, but they can look and act the part. Maybe that’s enough. Humans respond emotionally to robots, particularly caregiving robots, regardless of the robots’ inability to do the same. Astronauts could benefit from the best of both worlds—physical robots with facial recognition software, the ability to interpret moods based on patients’ expressions, and the ability to respond accordingly, coupled with cutting-edge programming that emulates therapeutic treatment modalities. The therapy couch would be optional.
