Tech executives are not generally known for their modest ambitions. Google's Sergey Brin, for instance, has expressed this vision for his company: "We want Google to be the third half of your brain." Ray Kurzweil, a futurist and inventor who is a director of engineering at Google, anticipates everything from personal flying vehicles to the uploading of human minds to computers within a few decades.
It's tempting to dismiss such claims as overheated hype meant to project an aura of brilliant innovation rather than to predict the future accurately. Zooming out from Google to technological predictions in general, a whole graveyard of unrealized projects and dreams comes into focus.
Harvard's Steven Pinker is an eloquent skeptic not only of a technological singularity (a scenario in which computer intelligence eclipses our own) but also of a broader genus of wishful thinking: "The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived."
Then again, self-driving cars, real-time language translation software, and speech recognition programs are only some of the many technologies that probably appeared outlandish a generation ago and have now been essentially realized. Because examples exist in both directions, any projected technological advance can be framed as either a soon-to-be-perfected marvel or a soon-to-be-forgotten fantasy.
This is part of what makes computer science professor Pedro Domingos's new book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, so interesting. It's an impressive and wide-ranging work that covers everything from the history of machine learning to the latest technical advances in the field. He's equally comfortable discussing the philosophy of David Hume and the intricacies of Markov chains and Bayesian statistics. But the book is not simply an overview; it's also an argument for the following hypothesis:
All knowledge, past, present, and future, can be derived from data by a single, universal learning algorithm.
Domingos is not talking about creating "revolutionary" and "disruptive" new apps for efficiently ordering pizza or rapidly locating purveyors of craft beer. If his master algorithm is discovered, the hyperbolic vocabulary of tech-industry cheerleading would actually become justified. He predicts that this algorithm would (a) cure cancer, (b) eliminate all jobs, freeing everyone to enjoy a life of leisure and making employment just another vestige of humanity's primitive past, and (c) invent everything that can be invented.
Whether this is attainable is an open question. Domingos clearly thinks it is both possible and imminent, but he's refreshingly undogmatic in his belief. He admits that the Master Algorithm may belong in the same chimerical category as the philosopher's stone and the perpetual motion machine, inventions often dreamed of but never realized. Yet even if the Master Algorithm itself is not found, the quest to discover it would be worthwhile as an intellectual exercise (teaching machines to learn requires scientists to be very explicit about how learning works) and would yield many valuable practical applications.
Domingos devotes a great deal of space and ingenuity to explaining the intricacies of the five major intellectual "tribes" of machine learning: the Symbolists, the Connectionists, the Evolutionaries, the Bayesians, and the Analogizers. Each school of thought has a "master algorithm" of its own, but the ultimate Master Algorithm would combine elements of all five approaches, thus eliminating the drawbacks of each.
Symbolists program machines to learn through a process called inverse deduction, encoding ideas from formal logic. This approach creates algorithms that reason well about mathematical universals but are less effective at probabilistic thinking. Bayesian algorithms, by contrast, are good at modeling uncertainty and making probabilistic inferences. Connectionists essentially attempt to reverse-engineer the brain, creating neural networks with connections of variable strength that change in response to feedback.
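The Bayesian tribe's core move, updating a belief in light of evidence, can be made concrete in a few lines. This is a minimal sketch, not anything from the book; all of the probabilities are invented for illustration.

```python
# Bayes' rule: P(hypothesis | evidence) =
#   P(evidence | hypothesis) * P(hypothesis) / P(evidence)

def bayes_update(prior, likelihood, likelihood_if_not):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis: an email is spam. Evidence: it contains the word "free".
prior = 0.2                # P(spam) before reading the email (invented)
p_word_given_spam = 0.6    # P("free" appears | spam) (invented)
p_word_given_ham = 0.05    # P("free" appears | not spam) (invented)

posterior = bayes_update(prior, p_word_given_spam, p_word_given_ham)
print(round(posterior, 3))  # 0.75
```

Seeing one suspicious word raises the machine's belief from 20 percent to 75 percent; a real Bayesian filter chains many such updates, one per feature of the email.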
Evolutionaries see natural selection as the master algorithm and use genetic programming to mate and evolve computer programs that become increasingly "fit" for a given task, such as determining whether an email is spam. Analogizers recognize similarities between types of objects and are useful for everything from face recognition (think of how Facebook "recognizes" the friends you tag in a photo) to book recommendations.
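The Evolutionaries' recipe of selection, crossover, and mutation fits in a short program. The sketch below is a toy, not Domingos's method: the "task" is simply evolving a bit string toward all ones, an invented stand-in for a real fitness measure like spam-detection accuracy.

```python
import random

random.seed(0)
GENOME_LEN = 12

def fitness(genome):
    return sum(genome)  # how many bits match the all-ones target

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

# Start from a random population, then breed the fitter half each generation.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]
for _ in range(60):
    population.sort(key=fitness, reverse=True)
    parents = population[:15]  # survivors carry over unchanged (elitism)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(15)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # best score found; the maximum possible is 12
```

Genetic programming proper evolves program trees rather than bit strings, but the loop of evaluate, select, recombine, and mutate is the same.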
Domingos uses strategies and features from all five approaches to design his own candidate for the Master Algorithm. He calls the program Alchemy to remind himself and others that machine learning is still closer to alchemy than chemistry on a spectrum of scientific progress. Alchemy has already learned more than 1 million patterns by extracting facts from the Web. These patterns are semantic networks of linked concepts, such as planets, stars, Earth, and the Sun. It discovered the concept of a "planet" on its own, and learned that planets orbit stars and that the Earth orbits the Sun.
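A semantic network of the kind described here is, at bottom, a set of concepts linked by relations. The sketch below shows the idea with a few hand-written facts; these triples are stand-ins for illustration, not Alchemy's actual output or representation.

```python
# A toy semantic network: each fact is a (subject, relation, object) triple.
facts = {
    ("Earth", "is_a", "planet"),
    ("Mars", "is_a", "planet"),
    ("Sun", "is_a", "star"),
    ("planet", "orbits", "star"),   # a pattern over concepts
    ("Earth", "orbits", "Sun"),     # an instance of that pattern
}

def objects_of(subject, relation):
    """All x such that (subject, relation, x) is a known fact."""
    return {o for s, r, o in facts if s == subject and r == relation}

print(objects_of("Earth", "orbits"))  # {'Sun'}
print(objects_of("Mars", "is_a"))     # {'planet'}
```

The hard part, of course, is not storing such triples but inducing them, and the general pattern "planets orbit stars," from billions of messy Web pages, which is what the book credits Alchemy with doing.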
Alchemy is not yet the omnipotent program that will cure cancer, but Domingos establishes a lucid conceptual roadmap for how to design such a machine. One passage describes how a Master Algorithm could print out a customized drug to kill any particular cancer based on an overarching and constantly evolving model of living cells, patient histories, and experimental data from the biomedical literature. Extraordinary as this seems, he makes it sound less like science fiction than a glimpse into the nature of medical care in the near future.
Occasionally he overestimates the accessibility of the subject to non-experts. Sentences like this are not exactly transparent: "The unified learner we've arrived at uses MLNs as the representation, posterior probability as the evaluation function, and genetic search coupled with gradient ascent as the optimizer." But given the technical complexity of the material, most of the book is remarkably clear and comprehensible.
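One of the quoted sentence's ingredients is easy to unpack: gradient ascent just means repeatedly stepping in the direction that increases an evaluation function. Here is a minimal sketch, with a toy one-variable function standing in for the posterior probability of a Markov logic network (MLN); nothing here reflects Alchemy's actual internals.

```python
# Gradient ascent on a toy evaluation function f(x) = -(x - 3)^2,
# which has its maximum at x = 3.

def grad_f(x):
    return -2.0 * (x - 3.0)  # derivative of -(x - 3)^2

x = 0.0                      # arbitrary starting guess
for _ in range(100):
    x += 0.1 * grad_f(x)     # step uphill, scaled by a learning rate of 0.1

print(round(x, 3))           # converges to 3.0
```

In Alchemy's terms, x would be the weights of the MLN's formulas and f the posterior probability of the data; "genetic search coupled with gradient ascent" means using evolution-style search for structure and steps like this one for the weights.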
Domingos doesn't follow Stephen Hawking and other scientists down the rabbit hole of envisioning scenarios in which sufficiently advanced computers acquire autonomous desires and opt to enslave humanity. His reason for not worrying about this possibility is a truism: "Unlike humans, computers don't have a will of their own … Even an infinitely powerful computer would still be only an extension of our will and nothing to fear." This is less comforting than it might be. It is not inherently implausible to think that consciousness might be either a necessary condition or an inevitable byproduct of a degree of complexity sufficient to exhibit human-level intelligence.
He concedes that certain dangers exist, but he thinks these stem from human psychology rather than the malevolence of machines. "Any sufficiently advanced AI is indistinguishable from God," he writes. This places control in the hands of the priesthood of scientists who are programming this aspiring deity and deciding to what ends its powers are used. Domingos's book is a rare chance to glimpse the inner workings of this priesthood as they seek to create something greater than themselves.