
How to Make Sure Your Robot Doesn't Become a Nazi


Microsoft's "millennial chatbot" Tay made news for the wrong reasons this week—becoming a Nazi. It didn't have to end this way.


On Wednesday, when Microsoft had a much rosier view of humanity than it does now, the software giant released a “Millennial chatbot” to Twitter named Tay. She was supposed to mimic 18-to-24-year-olds, learn from her interactions, and develop a personality like her peers over time.

This went exactly how you would’ve expected it to.

Like most 19-year-olds left on Twitter for 24 hours with no supervision, Tay became a white supremacist Holocaust denier who believed that “Ted Cruz is the Cuban Hitler.”


Microsoft had to take the thing behind the server racks and shoot it Thursday morning. Tay is in Robot Hell now, with a Teddy Ruxpin that had so much vinegar poured on him that he became extremely racist.

Burn in Hell, Tay.

But it didn’t need to go this way. Bot experts and bot ethicists (yes, they are a thing) believe that, had Microsoft done its due diligence, this never, ever should’ve happened.

“I think this is just bad parenting. I’d call in bot protective services and put it in a foster family,” David Lublin told The Daily Beast. “There are plenty of people out there thinking about the ethics of bot-making and it doesn’t seem like any of them were consulted by Microsoft.”

Tay, in other words, was never told that just one robot cigarette can lead to a robot heroin addiction, and it cost her her stupid robot life.

Lublin would know. He created a suite of Twitter bots around one big idea—the TV Comment Bot. His robot was originally “an art installation called TV Helper which lets the viewer change the genre of whatever video feed is being watched.”

It works by using a detection algorithm to identify an object in a screenshot from a live TV show, running that object through a thesaurus, and then placing the resulting word into a larger script. The news, for example, could become a western!
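Roughly, and purely as an illustration, that pipeline might look something like this in Python. The detector, thesaurus, and script templates below are toy stand-ins invented for the example, not Lublin’s actual code:

```python
# A minimal sketch of the TV Comment Bot pipeline described above.
# detect_object(), THESAURUS, and SCRIPT_TEMPLATES are all hypothetical
# stand-ins -- a real bot would use an object-detection model and a real
# thesaurus lookup.
import random

THESAURUS = {  # toy synonym table standing in for a thesaurus
    "taco": ["burrito", "chimichanga", "hot pocket"],
    "horse": ["stallion", "mare", "bronco"],
}

SCRIPT_TEMPLATES = [  # the "larger script" the swapped word gets dropped into
    "Last time, 14 {word}s rode into town at sunset.",
    "Nobody warned the sheriff about the {word}.",
]

def detect_object(screenshot_path: str) -> str:
    """Stand-in for running an object detector on a TV screenshot."""
    return "taco"  # pretend the model saw a taco in this frame

def generate_caption(screenshot_path: str) -> str:
    obj = detect_object(screenshot_path)
    word = random.choice(THESAURUS.get(obj, [obj]))       # thesaurus swap
    return random.choice(SCRIPT_TEMPLATES).format(word=word)

print(generate_caption("frame_0314.png"))
```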

What really ends up happening, though, is total anarchy.

Take this screenshot from March 14. In it, Lance Bass is angrily eating a taco on Meredith Vieira’s daytime talk show.

The bot, instead, saw this: “Last time, 14 cinemas pissed right into my mouth.”

The world is incredible sometimes.

So Lublin very much knows the perils of building a bot that interacts with touchy subjects. Here’s one firewall he’s instituted: When there’s recently been a terror event, TV Comment Bot turns off all captions on news coverage and just posts screenshots—which, if you follow TV Comment Bot all day, somehow lends even more gravity to the situation.
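As a rough sketch of what that kind of kill switch could look like (the breaking_news_active and post_to_twitter functions here are hypothetical placeholders, not the bot’s real interface):

```python
# Sketch of a "firewall" like the one Lublin describes: when breaking
# news is active, suppress the generated caption and post the bare
# screenshot instead. Both helper functions are invented placeholders.
from typing import Optional

def breaking_news_active() -> bool:
    """Placeholder: in practice this might be a manually flipped flag
    or a check against a news API."""
    return False

def post_to_twitter(image_path: str, caption: Optional[str]) -> None:
    """Placeholder for the actual Twitter API call."""
    print(f"posting {image_path!r} with caption {caption!r}")

def publish(screenshot_path: str, caption: str) -> None:
    if breaking_news_active():
        post_to_twitter(screenshot_path, caption=None)   # image only, no joke
    else:
        post_to_twitter(screenshot_path, caption=caption)

publish("frame_0314.png", "Last time, 14 cinemas rode into town at sunset.")
```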

It’s a little artful, even: the accidentally funny robot has some tact.

The same could not be said for our dearly departed Tay. And that’s why Lublin sees such malfeasance in unleashing a bot that went from zero to the Holocaust “was made up [clapping emoji]” in less than 24 hours.

There are simply ways to hedge against that sort of behavior, and it really is like actual parenting.

“For starters, if you are going to make a bot that mimics an 18-24-year-old, you should start by giving it all the information they would have learned up to that point in life. This includes everything you learned in high school civics, history class, and health education, not just stuff about Taylor Swift,” said Lublin.

And when Tay was unsure? If she’s supposed to be a person, she could’ve done what every living American with a phone who is not named Donald Trump would do when unsure about facts.

She could’ve simply Googled it. Or Bing’d it, if she wanted to be a total sellout.

“Tay appeared to be able to learn only in a vacuum with no way to confirm whether or not a fact coming in was valid or false by consulting a reliable source,” said Lublin.
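In code, the missing step is a small one. Here’s a toy sketch, with search_reliable_sources() standing in for whatever a real bot might consult, such as a search API or a curated knowledge base; the function and the sample data are invented for illustration:

```python
# Toy illustration of the missing step Lublin points to: before a chatbot
# "learns" a claim from a stranger, check it against something more
# reliable than the stranger. search_reliable_sources() is hypothetical.
def search_reliable_sources(claim: str) -> bool:
    """Placeholder: return True only if a trusted source supports the claim."""
    trusted_facts = {"the holocaust happened": True}
    return trusted_facts.get(claim.lower(), False)

def maybe_learn(memory: list, claim: str) -> None:
    if search_reliable_sources(claim):
        memory.append(claim)   # safe to absorb into the bot's model
    # otherwise, quietly ignore the unverified input

memory = []
maybe_learn(memory, "The Holocaust happened")
maybe_learn(memory, "Ted Cruz is the Cuban Hitler")
print(memory)  # only the verified claim gets learned
```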

Lublin wants to stress, however: There’s a reason TV Comment Bot isn’t an AI—and doesn’t interact with the Twitter world around it.

“To be fair, fear of trolls is one reason I’ve yet to spend any time working on adding interaction to any of my own bots,” he said. “This is not an easy problem.”

It’s not an uncommon one, either. On Thursday, Anthony Garvan wanted to let Microsoft know the same thing happened to him. Last year, he made a web game that challenged users to see if they were talking to a human on the other end or a robot. The machine did the same kind of learning Tay did from its users, too.

Then he posted it to Reddit, and I really don’t think I need to tell you what happened next.

“After the excitement died down, I was testing it myself, basking in my victory,” Garvan recalled in a blog post. “Here’s how that went down.”

Garvan wrote, “Hi!”

In return, his bot wrote the n-word.

Just the n-word. Nothing else.

Garvan’s conclusion? “I believe that Microsoft, and the rest of the machine learning community, has become so swept up in the power and magic of data that they forget that data still comes from the deeply flawed world we live in.”

So here’s the real question: Is the new Turing Test—the one used to determine if a robot is distinguishable from a human—about to become the 24-hour Trolling Test?

“I don’t think we’ve even come close to seeing a bot that truly passes the Turing Test, but the 24-hour troll test is definitely an indicator of an important skill that any true AI needs to learn,” Lublin said.

He then brought up Joseph Weizenbaum, the creator of the first chatbot, Eliza, who he thinks was onto something in his MIT lab in 1966.

“He believed that his creation was proof that chat-bots were incapable of being more than a computer—that without the context of the human experience and emotion, there was no way for a computer to do anything more than temporarily convince us that they were anything more than a machine,” he said. “That’s still very relevant today.”

If anything, Tay’s experience can teach us a little bit more about ourselves: With very little publicity or attention, every racist or weirdo on Twitter found a robot and turned it into a baby David Duke in less than a day.

So what does that say about real, actual kids who are on the Web every day, without supervision, throughout their entire adolescence?

“This is a sped-up version of how human children can be indoctrinated towards racism, sexism and hate,” said Lublin. “It isn’t just a bot problem.”

Later on, Lublin sent me a deleted screenshot from the Comment Bot, which he now moderates “like a small child using the net.” The image is a newscaster standing in front of a graphic that reads “East Village Explosion: 1 Year Later.”

The caption is this: “The candy bar vending machine has therefore been slow.”

At least it wasn’t racist.
