Artificial intelligence has gotten pretty darn smart—at least, at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think?
The answer is complicated, largely because intelligence is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it’s rare and difficult to be intelligent in all of these ways. Intelligence has many sources and our brains don’t respond to them all the same way. Thus, the quest to develop artificial intelligence begets numerous challenges, not the least of which is what we don’t understand about human intelligence.
Still, the human brain is our best lead when it comes to creating AI. Human brains consist of billions of interconnected neurons that transmit information to one another, along with regions dedicated to functions such as memory, language, and thought. The human brain is dynamic, and just as we build muscle, we can enhance our cognitive abilities—we can learn. So can AI, thanks to the development of artificial neural networks (ANNs), a type of machine learning algorithm in which nodes simulate neurons that compute and pass along information. AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute the statistical probabilities and outcomes of various moves, but also to adjust its strategy based on what the other player does.
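To make that basic ingredient concrete, here’s a minimal sketch of a single artificial “neuron” in Python. The specific inputs, weights, and sigmoid activation are illustrative assumptions, not a description of AlphaGo’s actual network.

```python
# A minimal sketch of a single artificial "neuron" (node); the weights,
# bias, and inputs here are made-up numbers chosen only for illustration.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, like a neuron integrating its inputs
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into a "firing strength" between 0 and 1
    return 1 / (1 + math.exp(-total))

print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))  # roughly 0.56
```

A network is just many of these nodes wired together, with the weights adjusted as the system learns.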
Facebook, Amazon, Netflix, Microsoft, and Google all employ deep learning, which expands on traditional ANNs by adding layers of processing between input and output. More layers allow for richer representations of the data and more links between those representations. This resembles human thinking—when we process input, we do so in something akin to layers. For example, when we watch a football game on television, we take in the basic information about what’s happening in a given moment, but we also take in a lot more: who’s on the field (and who’s not), what plays are being run and why, individual match-ups, how the game fits into existing data or history (Does one team frequently beat the other? Is the quarterback passing for as many yards as usual?), how the refs are calling the game, and other details. In processing this information we employ memory, pattern recognition, statistical and strategic analysis, comparison, prediction, and other cognitive capabilities. Deep learning attempts to capture those layers.
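As a rough sketch of what “adding layers” means, the toy Python snippet below stacks two layers of neuron-like math on top of each other. The random weights are stand-ins; a real deep learning system learns its weights from enormous amounts of data.

```python
# A toy illustration of "adding layers," assuming random example weights;
# real deep-learning systems learn these weights rather than rolling them at random.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                  # raw input (say, a handful of pixel or word features)

W1 = rng.standard_normal((8, 4))   # layer 1: 4 inputs -> 8 intermediate features
W2 = rng.standard_normal((3, 8))   # layer 2: 8 features -> 3 higher-level outputs

h = np.maximum(0, W1 @ x)          # first layer builds simple representations (ReLU)
y = np.maximum(0, W2 @ h)          # second layer combines them into richer ones
print(y)
```

Each added layer gives the network another chance to combine simpler features into more abstract ones, which is where the resemblance to layered human processing comes from.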
You’re probably already familiar with deep learning algorithms. Have you ever wondered how Facebook knows to place an ad for rain boots on your page after you got caught in a downpour? Or how it manages to recommend a page immediately after you’ve liked a related one? Facebook’s DeepText algorithm can process thousands of posts, in dozens of different languages, each second. It can also distinguish between Purple Rain and the reason you need galoshes.
Deep learning can also be applied to faces, identifying the family members who attended an anniversary or the employees who thought they attended that rave on the down-low. These algorithms can recognize objects in context, too—such a program could identify the alphabet blocks on the living room floor, as well as the pile of kids’ books and the bouncy seat. Think about the conclusions that could be drawn from that snapshot, and then used for targeted advertising, among other things.
Google uses recurrent neural networks (RNNs) to facilitate image recognition and language translation. This lets Google Translate go beyond a typical one-to-one conversion, allowing the program to make connections between languages it wasn’t specifically trained on. Even if Google Translate has never been explicitly trained to translate Icelandic into Vietnamese, it can do so by finding commonalities between the two tongues and developing its own internal representation, a kind of interlingua, that bridges them.
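To get a feel for the interlingua idea, here’s a deliberately crude Python cartoon with a made-up two-word vocabulary and hand-written concept labels. The real system learns shared numerical representations from data rather than relying on a lookup table like this.

```python
# A cartoon of the interlingua idea: both languages map into a shared
# "concept" space, so no direct Icelandic-to-Vietnamese table is needed.
to_concept = {
    ("is", "hundur"): "DOG", ("is", "köttur"): "CAT",   # Icelandic -> shared concept
    ("vi", "chó"): "DOG",    ("vi", "mèo"): "CAT",      # Vietnamese -> shared concept
}
from_concept = {
    ("vi", "DOG"): "chó",    ("vi", "CAT"): "mèo",      # shared concept -> Vietnamese
    ("is", "DOG"): "hundur", ("is", "CAT"): "köttur",   # shared concept -> Icelandic
}

def translate(word, src, tgt):
    concept = to_concept[(src, word)]    # encode into the shared "language"
    return from_concept[(tgt, concept)]  # decode into the target language

print(translate("hundur", "is", "vi"))   # -> "chó", without a direct pairing of the two tongues
```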
Machine thinking has been tied to language ever since Alan Turing’s seminal 1950 paper “Computing Machinery and Intelligence,” which described the Turing Test—a measure of whether a machine can think. In the Turing Test, a human engages in a text-based chat with an entity they can’t see. If that entity is a computer program and it can convince the human that it’s another human, it has passed the test. Iterations of the Turing Test, such as the Loebner Prize, still exist, though it’s become clear that just because a program can communicate like a human (complete with typos, an abundance of exclamation points, swear words, and slang) doesn’t mean it’s actually thinking. A 1960s Rogerian computer-therapist program called ELIZA duped participants into believing they were chatting with an actual therapist, perhaps because it asked questions and, unlike some human conversation partners, appeared to be listening. ELIZA harvests key words from a user’s response and turns them into questions, or simply says, “Tell me more.” While some argue that ELIZA passed the Turing Test, it’s evident from talking with ELIZA (you can try a version online) and similar chatbots that language processing and thinking are two entirely different abilities.
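The core trick is simple enough to sketch in a few lines of Python. The tiny keyword list below is an assumption made for illustration, a far cry from ELIZA’s full script of patterns, but it shows how keyword harvesting and reflection can feel like listening without any understanding at all.

```python
# A minimal ELIZA-style sketch, assuming a tiny hand-picked keyword list;
# the original program used a much larger script of patterns and responses.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def eliza_reply(user_text):
    words = user_text.lower().rstrip(".!?").split()
    for keyword in ("feel", "think", "want", "need"):
        if keyword in words:
            # Echo back everything after the keyword, with pronouns flipped
            rest = words[words.index(keyword) + 1:]
            reflected = " ".join(REFLECTIONS.get(w, w) for w in rest)
            return f"Why do you {keyword} {reflected}?"
    return "Tell me more."

print(eliza_reply("I feel nobody listens to me"))  # Why do you feel nobody listens to you?
print(eliza_reply("The weather is awful"))         # Tell me more.
```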
But what about IBM’s Watson, which thrashed the top two human contestants in Jeopardy? Watson’s dominance relies on instant access to massive amounts of information, as well as its ability to compute how likely each candidate answer is to be correct. In the game, Watson received this clue: “Maurice LaMarche found his inner Orson Welles to voice this rodent whose simple goal was to take over the world.” Watson’s possible answers and probabilities were as follows:
Pinky and the Brain: 63 percent
Ed Wood: 10 percent
capybara: 10 percent
Googling “Maurice LaMarche” quickly confirms that he voiced the Brain. But the clue is tricky because it contains a number of key terms: LaMarche, voiceover, rodent, and world domination. “Orson Welles” functions as a red herring—yes, LaMarche supplied his trademark Orson Welles voice for Vincent D’Onofrio’s character in Ed Wood, but that line of thought has nothing to do with a rodent. Similarly, a capybara is a South American rodent (the largest in the world, which perhaps Watson connected with the “take over the world” part of the clue), but the animal has no connection to LaMarche or to voiceovers unless LaMarche does a mean capybara impression. A human brain probably wouldn’t conflate concepts the way Watson does here; indeed, Ken Jennings buzzed in with the right answer.
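For a sense of how that final buzz-or-pass decision might work, here’s a toy Python sketch built around the probabilities quoted above. The buzz threshold is a hypothetical number chosen for illustration, not Watson’s actual strategy, and the real system combines many evidence-scoring steps before committing to an answer.

```python
# A toy sketch of confidence-ranked answering, using the probabilities quoted
# above and a hypothetical buzz threshold (not Watson's actual decision rule).
candidates = {"Pinky and the Brain": 0.63, "Ed Wood": 0.10, "capybara": 0.10}
BUZZ_THRESHOLD = 0.50  # hypothetical cutoff for risking a wrong answer

best, confidence = max(candidates.items(), key=lambda kv: kv[1])
if confidence >= BUZZ_THRESHOLD:
    print(f"Buzz with: {best} ({confidence:.0%} confident)")
else:
    print("Stay silent and let the humans answer.")
```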
Still, Watson’s capabilities and applications continue to grow—it’s now working on cancer. With case histories, diagnostic information, treatment protocols, and other data uploaded to it, Watson can work alongside human doctors to help identify cancer and determine personalized treatment plans. “Project Lucy” focuses Watson’s supercomputing powers on helping Africa meet farming, economic, and social challenges. Watson can prove itself intelligent in discrete realms of knowledge, but not across the board.
Perhaps the major limitation of AI can be captured by a single letter: G. While we have AI, we don’t have AGI—artificial general intelligence (sometimes referred to as “strong” or “full” AI). The difference is that AI can excel at a single task or game, but it can’t extrapolate strategies or techniques and apply them to other scenarios or domains—you could probably beat AlphaGo at Tic-Tac-Toe. That missing ability is something like the human capacity for critical thinking and synthesis: we can apply knowledge about a specific historical movement to a new fashion trend, or use effective marketing techniques in a conversation with a boss about a raise, because we can see the overlaps. AI can’t, for now.
Some believe we’ll never truly have AGI; others believe it’s simply a matter of time (and money). Last year, Kimera unveiled Nigel, a program it bills as the first AGI. Since the beta hasn’t been released to the public, it’s impossible to assess those claims, but we’ll be watching closely. In the meantime, AI will keep learning just as we do: by watching YouTube videos and by reading books. Whether that’s comforting or frightening is another question.