Science

No, You Won’t Recognize the Robot Revolution Like an Apocalyptic Movie

SMARTER AND SMARTER

Facebook’s robots may have started communicating in their own language, but it’s hardly time to start fearing an artificial intelligence takeover.

Illustration by Sarah Rogers/The Daily Beast

Recently, Facebook’s Artificial Intelligence Lab found its robots had moved on to a new language. Presumably pursuing the negotiation instructions their developers had given them (described in a mid-July Facebook blog post), the robots had broken English words into a new communication structure and syntax, leaving the developers in the dark.

They stopped the runaway language and put the communication back on English tracks. (For those interested, the bots’ new language looked a little like someone doing long division by hand—here are some transcripts.)

The developers stopped the machine chatter for a couple of reasons. Neither was that the machines were plotting to take over the world.


One reason to have the bots chat in English is to interface with their “clients,” the eventual human customers. Another is so that developers can “eavesdrop” and improve their algorithms.

But the recent headlines around this Facebook bot language deviation are fraught with fear. They assume that the developers halted the new language because, well, we can’t let AI get too fancy. Some reporting literally mentions “the singularity,” a sci-fi term for the moment when AI changes humankind irrevocably. (If you’re like me, you’re thinking, “I hope so!”)

This fear-mongering may be sexy, but it’s not useful. The Facebook robots aren’t “creepy”; they’re machines doing tasks reasonably well—in this case, negotiating some basic interactions. They’re following the code that humans gave them.

Some folks have rightly distinguished between artificial intelligence and the kind of calendar scheduling that mobile assistants like Siri perform. (Siri is deliberately a servant, not an artificial intelligence.) I have even argued for diverse development teams precisely because our current Siri equivalents are so mechanical and shallow, and I want to see more transformative technology.

But we’re still working on AI at a pretty basic level. The whole premise of this Facebook experiment started from Facebook’s recognition that so far, chatbots are “capable of simple tasks, such as booking a restaurant.” Here, Facebook developers strove to get one level higher: to code bots that engage in repeated interactions while bearing in mind common goals, as they traded objects like books, hats, or balls.
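To make the setup concrete, here is a minimal, hypothetical sketch of that kind of negotiation task—not Facebook’s actual code. The item pool, the agents’ private values, and the greedy heuristic standing in for the learned negotiation policy are all illustrative assumptions.

```python
# Hypothetical sketch of a two-agent item-split negotiation task.
# Pool of shared items (counts are assumed for illustration).
POOL = {"book": 2, "hat": 1, "ball": 3}

# Each agent privately values the item types differently (assumed values).
values_a = {"book": 5, "hat": 3, "ball": 1}
values_b = {"book": 1, "hat": 4, "ball": 3}

def greedy_split(pool, va, vb):
    """Toy stand-in for a negotiation policy: give every instance of an
    item to whichever agent values that item type more."""
    alloc_a, alloc_b = {}, {}
    for item, count in pool.items():
        winner = alloc_a if va[item] >= vb[item] else alloc_b
        winner[item] = winner.get(item, 0) + count
    return alloc_a, alloc_b

def score(alloc, vals):
    """An agent's private score for the items it ended up with."""
    return sum(vals[item] * n for item, n in alloc.items())

a, b = greedy_split(POOL, values_a, values_b)
print("Agent A gets", a, "worth", score(a, values_a))
print("Agent B gets", b, "worth", score(b, values_b))
```

The real experiment trained the agents’ dialogue strategies rather than hard-coding a split, but the structure—shared pool, private valuations, a deal both sides must accept—is what made English-language chatter merely one possible channel, not a requirement.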

There seems to be a camp of people (mostly older white men, including Elon Musk and Ray Kurzweil) who are afraid of AI outpacing humans. They invoke the word “apocalypse.” As I understand their concern, it is that robots will run laps around humans because they have so much processing power.

I’m excited about artificial intelligence, and I don’t fear being replaced. I believe that that which is essentially human cannot be out-computed.

That being said, the language deviation does prompt some neat questions. Could we have let the bots continue their language? Should we have linguists try to learn “bot,” or allow the bots to continue running if they chose a structure that was more standardized? Is this an opportunity to write “rails” by which we encourage machines (instead of humans) to write their own language (otherwise known as code), so long as they can reasonably translate it for us humans?

Harvard Law professor Larry Lessig has pleaded for the last 18 years for us to stop calling the Internet a “wild wild west”: remember, code either runs or it doesn’t.

It has always struck me as odd—wasteful, even—that lawyers are generally estranged from the coding community. Attorneys rarely partake in the intricacies or the humor that come with computer-coding nerdiness. Yet lawyers are some of the best positioned to understand the mathematical precision of language, and the need for plain speech. Presumably, the developers who had given the AI its instructions had to review their code and wonder, “What did I actually instruct, and where am I missing the English-language requirement?”

Those of us who speak English know well the inefficiencies of language rules, and I find it hardly surprising that, given no instruction otherwise, the bots abandoned the burden of grammar. Sure, there is something clinical about the efficiency the robots sought. Perhaps that is what inspires fear: the pursuit of clinical efficiency in the face of human imperfection.

But what else is inefficient? Lazy days in bed and romance poetry? Falling in love and making art? There is much about being alive—almost all the interesting parts—that is inherently un-robotic. If efficiency were all one sought, we might as well download as many memories as possible into a computer and stop eating.

Robots have the capabilities they have been assigned. Humans keep surprising ourselves, expanding our world through creativity. The art of living is inefficient. This Facebook negotiating task was an experiment that worked pretty well but lacked a few decisions from the outset regarding language instructions. At the end of the work day, those Facebook developers went home, kissed a family member, flipped through Netflix, made dinner, and felt all the intricacies of being an inefficient human who had just had a day at work in which some things worked better than others.

* The views expressed herein are the personal views of the author and do not necessarily represent the views of the FCC or the U.S. Government, for whom she works.
