The Self-Driving Car Is an Idea That Badly Needs a Tune-Up

WHERE’S YOUR BRAIN

To become autonomous, self-driving cars must become conscious, a service upgrade currently unavailable.

Photo Illustration by Luis G. Rendon/The Daily Beast/Getty

Though carmakers eagerly tout hands-free steering and other seemingly magical “self-driving” technologies, anyone who says fully autonomous vehicles are just around the corner is mistaken. The artificial minds of self-driving cars are missing one all-important technological ingredient: consciousness.

Many people believe that consciousness is a baffling and otherworldly phenomenon, far beyond scientific understanding or practical application. That’s an outdated view. Scientists now understand quite a bit about consciousness, including the fact that it is the key to autonomy in all living minds of greater complexity than a bumblebee. Indeed, the roughly three-billion-year journey of minds on Earth from primordial biochemistry to human civilization holds an important lesson for self-driving vehicles. The lesson is this: intelligence is easy. Autonomy is hard.

It took about 70 years of machine learning research for computers to surpass us in many of our most prized forms of intelligence, such as winning at chess, composing compelling music, and writing thoughtful essays. But three billion years of autonomy development in living minds has proven to be a far steeper mountain to climb. The difference between intelligence and autonomy is illustrated by IBM’s Watson, who trounced the greatest Jeopardy! players on the classic game show—but needed to be hard-wired to the show’s question-posing system. He was incapable of doing an autonomous task that any three-year-old could do: listening for the host to finish a question before buzzing in. In contrast, self-driving cars—whose intelligence is dwarfed by the jaw-dropping knowledge base of Watson—are far closer to achieving consciousness, because they need to be autonomous to fulfill their purpose.

Yet the technology for autonomous vehicles remains stuck at the level of a protozoan mind or, in its most cutting-edge forms, the level of an ant mind. When we retrace the incremental development of thinking on Earth from the microscopic minds of archaea to the downhill-skiing, fighter-jet-piloting minds of Homo sapiens, we discover that consciousness is a specific mental innovation that emerged during the transition from invertebrates to vertebrates to tackle the challenges of real-time autonomous decision-making in a complex, unpredictable, macro-scale environment. The exact same set of challenges must be solved by self-driving vehicles.

Consider driving in India. India’s exuberantly chaotic roadways are crammed full of a kaleidoscopic hodgepodge of vehicles: colorful rickshaws (some motor-powered, some foot-powered), decorated lorries, ratty pickups, pushcarts overflowing with vegetables, bullying SUVs, roadside bicycle mechanics, darting Maruti hatchbacks, and an armada of motorbikes, all maneuvering within inches of each other. Indian drivers frequently ignore traffic signs or road lines, treating them as benign guidelines. Cars do not slow down for pedestrians and pedestrians do not wait for a break in the traffic to cross the road, resulting in an endless cross-stream of people—and the occasional street dog—flowing through non-stop vehicular traffic from every side. A tumult of street hawkers and windshield washers adds to the confusion by approaching vehicles.

Driving in many parts of the world, including India, demands that drivers pay constant attention to both sight and sound: the only way to track the vehicles crowded around you is to listen to the constant honking and judge each honker’s position. (American honking usually means, “Get out of my way!” but Indian honking usually means, “I’m right here!”) The challenge for a self-driving car is to continuously segment environmental information coming from every direction into objects and events, identify and prioritize risks (I hear a lorry blasting her horn to my right! Is that a pedestrian crossing in front of me, or a hawker coming toward me? A langur is scurrying in front of my tires!), and act upon the prioritized risks instantly, all while continuously recalculating the optimal route to the destination (that clot of traffic huddled on the corner of Crawford Market looks more daunting than that empty avenue bending toward JJ Flyover).
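
To make that loop concrete, here is a toy Python sketch, ours rather than any carmaker’s code, of a single sense-prioritize-act tick. The Percept fields, urgency numbers, and evade threshold are all illustrative assumptions:

```python
from dataclasses import dataclass

# A toy sketch (not a real driving stack) of one tick of the loop
# described above: fuse percepts from every modality and direction,
# rank the risks, and act on the most urgent one.

@dataclass
class Percept:
    modality: str   # "vision", "audio", "lidar", ...
    label: str      # best guess at the object or event
    bearing: str    # rough direction relative to the car
    urgency: float  # 0.0 (ignorable) to 1.0 (act immediately)

def control_step(percepts, evade_threshold=0.8):
    """Rank all percepts by urgency and act on the most pressing one."""
    top = max(percepts, key=lambda p: p.urgency)
    if top.urgency >= evade_threshold:
        return f"evade: {top.label} ({top.bearing})"
    return "proceed: continue along planned route, keep re-ranking"

scene = [
    Percept("audio",  "lorry horn",         "right", 0.60),
    Percept("vision", "langur near tires",  "front", 0.95),
    Percept("vision", "hawker approaching", "left",  0.30),
]
print(control_step(scene))  # -> evade: langur near tires (front)
```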

To break free of human minders and a reliance on stable, familiar environments, a self-driving car must be able to evaluate and respond to multiple simultaneous context-dependent inputs (including novel and unique inputs) from different modalities (audio, visual, lidar, navigational) arriving from every direction (front, back, left, right, above, below) while pursuing an objective (drive to Bazar Road for a delivery). With current self-driving-mind architectures, this is probably an unsolvable challenge even in a country with roads as “well-behaved” as America’s. In an anything-goes nation like India, it’s utterly intractable.

Following the success of Deep Learning algorithms (which implement intelligence, not autonomy), the standard approach to designing self-driving minds is to model the vehicle’s expected environment based upon statistical regularities, such as how often red dots in the periphery are traffic lights and how quickly cars typically accelerate away after a green light. In effect, self-driving cars create a statistical model of the world in the past, then attempt to fit all future events into this historical model. This means that statistics-driven self-driving minds sometimes label unexpected but highly relevant events as noise. In the Deep Learning framework, the solution to unexpected problems (like an upside-down truck in the road) is to crash now, crunch later—crunch ever more historical data hoping to somehow model every possible eventuality that the universe might hurl onto the asphalt. But there’s a very good reason why living brains never approached the challenge of autonomous navigation this way.
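
To see that failure mode in miniature, consider this toy Python calculation, an illustration of the critique above rather than any production perception system. The Gaussian world model and the noise threshold are our assumptions:

```python
import math

# A toy illustration of "fitting the future into a historical model":
# a detector that trusts statistics over past observations will score
# a rare event as wildly improbable and may discard it as sensor noise.

def gaussian_logpdf(x, mean, std):
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

# Historical training data: typical road-obstacle heights in meters
# (cones, curbs, scattered debris).
MEAN_HEIGHT, STD_HEIGHT = 0.5, 0.3

NOISE_THRESHOLD = -8.0  # detections scored below this get discarded

def classify(height_m):
    score = gaussian_logpdf(height_m, MEAN_HEIGHT, STD_HEIGHT)
    return "obstacle: react" if score > NOISE_THRESHOLD else "outlier: discarded as noise"

print(classify(0.6))  # familiar debris  -> obstacle: react
print(classify(2.6))  # overturned truck -> outlier: discarded as noise
```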

To understand the role and operation of consciousness—and why it is essential for autonomy—let’s compare the unconscious mind of a fruit fly with the conscious mind of a mouse. Like self-driving cars, insects have extraordinary perceptual sensitivity and an impressive memory for the value of diverse perceptual patterns that together drive a small and highly constrained set of behaviors. I smell a scent similar to my previous experience with rotten jackfruit, therefore I will fly toward the attractive odor. Where fly minds fall short is in dealing with objects and events. They focus on perceptual patterns rather than holistic things, which means they are easily tripped up by objects with unusual patterns. A good example is the zebra: the equine’s black-and-white stripes befuddle fly minds, causing them to frequently bounce off a zebra’s hide instead of landing for a bite. This is fine for a fly, which can survive high-speed collisions due to its small size and flexible exoskeleton. But once you inhabit a large, fleshy bag of bones—or a skeleton of glass and fiberglass—avoiding collisions becomes more urgent.

A mouse enjoys a far more sophisticated behavioral repertoire than a fly, a mental upgrade resulting from an entirely new layer of thinking elements that operates on top of the “older” layer of fruit fly thinking elements. This new layer consists of highly specialized neural modules that manage complex objects and events. Mouse minds possess, for instance, a visual object recognition module, an auditory object recognition module, an olfactory object recognition module, an object valuation module, and a navigating-around-objects module. But the emergence of parallel modules that all simultaneously process different sorts of sensory inputs creates a new challenge for autonomous behavior that we might call the attention problem.
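
As a loose structural analogy in Python, our illustration and not a neuroscience model, the upgrade looks like a new layer of specialized recognizers stacked on an older pattern-reflex layer:

```python
# A loose structural analogy (not a neuroscience model): an "old" layer
# that maps raw patterns straight to reflexes, plus a "new" layer of
# specialized modules that each turn the same moment of experience into
# an object-level representation. Module names mirror the paragraph above.

def old_fly_layer(pattern):
    # Pattern in, reflex out; no notion of objects.
    return "approach" if "sweet" in pattern else "wander"

new_mouse_layer = {
    "visual object recognition":    lambda scene: f"object seen: {scene['sight']}",
    "auditory object recognition":  lambda scene: f"object heard: {scene['sound']}",
    "olfactory object recognition": lambda scene: f"object smelled: {scene['smell']}",
}

scene = {"sight": "nuts", "sound": "crackling", "smell": "berries"}
print(old_fly_layer("sweet scent on the wind"))  # the fly's whole repertoire
for name, module in new_mouse_layer.items():
    print(f"{name} -> {module(scene)}")          # parallel object streams
```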

If a mouse sees nuts on the ground straight ahead, smells berries somewhere to the side, remembers that an owl lives in the area, and hears a peculiar crackling sound in the leaves that it has never heard before, which mental representation should it focus on? In a brain with dozens of specialized modules simultaneously pursuing their own objectives, what kind of global dynamics enable every module to drop what it’s doing and pay attention to the same urgent representation?

Consciousness. Consciousness automatically and efficiently determines which module’s representation is worthy of the entire mind’s attention. Just as important, consciousness enables a complex real-time system to quickly identify and respond to unique but important opportunities and threats, such as an unfamiliar sound. This is where current self-driving cars fall short, and will always fall short: they rely upon a statistical model of the world instead of naturally embracing unexpected real-time events. Conscious minds are excellent at recognizing when an unfamiliar situation offers a big payoff (a crowd of cricket fans suddenly pours into the street on the left, providing cover to accelerate quickly to the right) or an imminent danger (a bullock cart toting red bricks is teetering precariously on the edge of the road—avoid), and the unique physical dynamics of consciousness are the reason why.

Consider our mouse. Its visual module forms a representation of nuts, its object localization module forms a representation of an owl, its olfactory module forms a representation of berries, and its auditory module forms a representation of the unknown noise. The dynamics of consciousness cause these diverse representations to compete in real time for control of the mouse’s global attention. The strange sound wins out, and the mouse’s mind devotes its resources to quickly evaluating the significance of the crackling (by recruiting the attention of the visual, olfactory, localization, and valuation modules) and generating a suitable response. Consciousness smoothly manages the global attention of a mind capable of efficient object management and module-to-module communication. In this regard, consciousness is merely another mental innovation in a very ancient line of innovations designed to manage ever-growing numbers of increasingly competent mental subsystems—and it’s not even the most recent or powerful such innovation (human language is another).
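
In code, that competition might look something like this toy Python sketch, a loose, global-workspace-flavored illustration of the dynamics just described. The salience numbers, the novelty bonus, and the module names are our assumptions:

```python
# A toy sketch of competition-and-broadcast among parallel modules:
# every module proposes a representation with a salience score, the
# winner captures global attention, and all other modules are recruited
# to evaluate it. Scores and names are illustrative assumptions.

proposals = {
    "visual":    {"content": "nuts ahead",          "salience": 0.55},
    "olfactory": {"content": "berries to one side", "salience": 0.40},
    "memory":    {"content": "owl lives nearby",    "salience": 0.50},
    "auditory":  {"content": "unknown crackling",   "salience": 0.90},  # novelty boost
}

# Winner-take-all: the most salient representation wins global attention.
winner = max(proposals, key=lambda name: proposals[name]["salience"])

# Broadcast: every other module drops its task and attends to the winner.
for name in proposals:
    if name != winner:
        print(f"{name} module now attending to: {proposals[winner]['content']}")
```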

But how to translate these theoretical insights into a practical design for engineering a conscious self-driving car? Boston University professor emeritus Stephen Grossberg’s unified mathematical account of consciousness, based upon 65 years of neural research, provides the blueprints for designing a truly autonomous vehicle. Grossberg’s equations for consciousness suggest that this endeavor will require radical shifts in the basic architecture of self-driving minds.

Consciousness demands multiple layers of real-time thinking, where each layer simultaneously and independently processes information while providing feedback to, and receiving feedback from, adjacent layers. Consciousness also requires parallel object- and event-processing modules at the highest level that exchange lateral feedback with other modules in real time. This vertebrate-mind-emulating architecture would permit parallel modules in a car mind to simultaneously process different objects and events and efficiently select which one to prioritize at any given moment, naturally handling event-dense environments (like driving through the cauldron of traffic on Mahim Causeway) and unexpected objects (such as an elephant striding by).
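
As a rough structural sketch, our simplification in Python and not Grossberg’s actual equations, the shape of such an architecture might look like this: stacked layers exchanging feedback vertically, with a second pass standing in for the continuous resonance a real-time system would run.

```python
# A skeletal sketch of the layered, feedback-everywhere architecture
# described above: a feedforward sweep, then a second pass in which the
# top layer's expectation feeds back to tune the layer below. The
# string-based "signals" and two-pass scheme are our simplifications.

def feature_layer(raw, expectation=None):
    signal = f"features({raw})"
    if expectation:
        signal += f" tuned by expectation [{expectation}]"
    return signal

def object_layer(features):
    return f"object hypothesis from {features}"

raw = "camera + lidar + audio frame"

# Pass 1: feedforward sweep up the stack.
hypothesis = object_layer(feature_layer(raw))

# Pass 2: feedback sweep; the hypothesis sharpens the feature layer,
# a crude stand-in for continuous layer-to-layer resonance.
hypothesis = object_layer(feature_layer(raw, expectation=hypothesis))
print(hypothesis)
```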

For example, if a conscious mind spots an overturned truck in the road and has never seen such a thingamajig, the mind doesn’t disregard it as an irrelevant outlier. Instead, the visual object recognition module broadcasts an alert to the entire mind and takes command of the mind’s dynamics by recruiting other modules to focus on the unknown whatchamacallit. Vertebrate minds are designed to do a decent job of recognizing objects from novel perspectives (also managed by the dynamics of consciousness), but even if the object is not quickly identified as a truck, the mind will still focus intensely on navigating around it (“Who knows what this weird thing will do, drive cautiously!”)—and, crucially, on learning about the truck on the fly, during a single exposure, so that next time it will be prepared for another such thingamabob.
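
A toy Python sketch of that behavior, with a policy table and labels that are purely illustrative assumptions: recognition failure triggers a broadcast and a cautious default, and the object is learned from the single exposure.

```python
# A toy sketch of broadcast-plus-one-shot-learning: an unknown object is
# never discarded; it grabs the whole mind's attention, gets a cautious
# default response, and is remembered after a single exposure.

policies = {"pedestrian": "yield", "traffic cone": "steer around"}

def perceive_and_act(label):
    if label not in policies:
        print(f"broadcast: unknown object '{label}' -- all modules attend")
        # One-shot learning: remember a cautious policy immediately.
        policies[label] = "treat as hazard: pass slowly and widely"
    return policies[label]

print(perceive_and_act("overturned truck"))  # novel: broadcast, then caution
print(perceive_and_act("overturned truck"))  # recognized on second exposure
```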

This is the secret to autonomy in advanced minds, including any car mind hoping to pilot its way through a boisterous metropolis.

Ogi Ogas, Ph.D., was a Department of Homeland Security Fellow at Boston University and a research fellow at the Harvard Graduate School of Education. Sai Gaddam, Ph.D., was a postdoctoral fellow in the Center for Adaptive Systems at Boston University. Ogas and Gaddam are the coauthors of Journey of the Mind: How Thinking Emerged from Chaos, published by Norton.
