In most moral reasoning classes, you’re likely to come across the trolley dilemma. There are several versions of the philosophical problem, but essentially it’s this: there’s a runaway train careening down the tracks heading straight toward five innocent people. You’re standing near a lever and have the opportunity to divert the train, only there’s a sixth person on the divergent route who will surely be struck. What do you do?
The prevailing thought is to sacrifice the few to save the many, but what if that lone person on the rerouted track is your son? What would you do then?
Now, what if we ask a robot to make the same decisions—what will it choose?
It’s a question that’s been cropping up more frequently as machines begin to tackle elaborate human tasks like driving. “It’s the most complex daily thing the average person does,” says Professor Raj Rajkumar of Carnegie Mellon University’s robotics department. According to Rajkumar, self-driving vehicles will be a multi-trillion-dollar industry by the end of this decade—that’s a lot of robot trolleys zooming toward those five people.
But the answer to the AI version of the trolley quandary—like all good ethics questions—is a trick: A robot would ideally never find itself in this situation, or so I learned while completing my mission specialist coursework with Uber’s Advanced Technologies Group to pilot their autonomous vehicles. Essentially, Uber University, where I majored in trialing early self-driving software for an eventual (read: hopeful) future when the robot in question would be intelligent enough to attempt an unaccompanied self-drive.
I was the first civilian invited to participate in the program, and I failed. Spectacularly. Not because I wasn’t a good driver (you should see my parallel parking skills)—I aced the philosophy seminar, too—but because the rigor of the program has changed dramatically since it relaunched following the incident that the ATG team refers to in hushed tones as “Tempe.”
On March 18, 2018, in Tempe, Arizona, Uber experienced its own real-life variation of the trolley problem, as one of its Volvo XC90s slid toward disaster. It was the perfect storm: the vehicle’s operator was overly confident in the technology, paying attention to her cellphone instead of the road; the self-driving software—still in a nascent phase—failed to positively identify an obstruction in the car’s direct path; and a pedestrian attempted to walk a bicycle across a wide, freeway-adjacent boulevard at night in a sign-posted area that forbade jaywalking. No one was mentally there—human or robot—to pull the lever, and the jaywalker was killed, instantly shuttering Uber’s burgeoning autonomous vehicle program.
It would be a full nine months before the program would relaunch in December 2018 as a pared-back version of its former self, focused mostly on a test track adjacent to the ATG headquarters in Pittsburgh. (San Francisco, Dallas and Toronto have small satellite research teams too, but Tempe remains permanently closed.) And from the extended break, a new program philosophy was born, one prioritizing the caliber of the data collected over its volume. New levels of safety were defined, and coursework was redeveloped to “more closely mirror aviation certification,” explains Nick Wedge, head of learning and development at Uber ATG. In short, the previously established safety standards were raised, which included additional reductions to system latency—the micro-delays in autonomous vehicles’ sensor-to-software-to-hardware communication—to ensure that “Tempe” won’t happen again.
The comparison with aviation is apt, as the use of autonomous technology on commercial aircraft is already commonplace: pilots handle taxiing, takeoffs and landings by hand, but once in the air, self-piloting technology kicks in, monitored by the flight deck. And as the current mode of transportation with the best safety record (the software-related Boeing 737 Max 8 disasters notwithstanding), the aviation industry’s standards of risk assessment and management have become the much-needed template on which the emerging self-driving industry has laid its safety framework. The sky is also, of course, a controlled environment, as every pilot goes through intensive training; one limiting factor for the proliferation of self-driving vehicles on the ground is quite simply the unpredictability of other humans. It’s what your father tells you when he’s teaching you how to drive in a Walmart parking lot: “It’s not you I’m worried about—it’s the other drivers!”
Never did I imagine as a teenager so desperate to get my learner’s permit that I’d one day be testing the work-in-progress technology that could, in the not-too-far future, make a license obsolete. After a preliminary few days of onboarding, briefings, and lectures on Uber’s corporate structure, trainees (everyone but me is an ATG employee) partake in a full week of intensive manual operations, getting acquainted with the Volvo XC90 and testing its capabilities on the cordoned-off track. Emergency maneuvers are practiced (like checking the screeching limits of anti-lock brakes), as are rigorous route navigation, parking, and reversing exercises—a keyhole car rotation (pulling a car out of a parking spot and threading it back into the same space through an almost impossibly tight loop) may rank as one of the most unpleasant things I’ve done in recent memory.
“A lot of what you learn in the manual portion of the mission specialist training program actually eventually translates into managing a self-driving vehicle,” explains Wedge. “Things like limit points and occlusions are really important”—terms I never heard when I was learning to drive, but essential factors when a driver is determining risk. Total hand-operated mastery of the vehicle is crucial for mission specialists because they are currently the intermediary between the robot and its environment as the software continues to adapt to road conditions in this liminal phase. The rigor of the manual portion of the training program also highlights the glaring irony of modern road travel: If regular drivers partook in a manual training program of this caliber, we’d already be one giant leap closer to autonomy thanks to the massive potential decrease in human error. So why aren’t regular drivers trained to a higher level in the first place?
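About those limit points and occlusions: the underlying road-craft rule is to never drive faster than you can stop within the stretch of road you can actually see, and an occlusion simply shrinks that visible distance. Here is a minimal sketch of the relationship, assuming illustrative values for reaction time and braking deceleration rather than anything from the ATG curriculum:

```python
import math

# Hedged sketch: never drive faster than you can stop within the road you
# can actually see. All numbers are illustrative assumptions.

def max_safe_speed(sight_distance_m: float,
                   reaction_time_s: float = 1.5,
                   decel_mps2: float = 6.0) -> float:
    """Largest v (m/s) satisfying v*t_r + v**2 / (2*a) <= sight distance."""
    # Positive root of (1/(2a)) * v**2 + t_r * v - d = 0.
    qa = 1.0 / (2.0 * decel_mps2)
    qb = reaction_time_s
    qc = -sight_distance_m
    return (-qb + math.sqrt(qb * qb - 4.0 * qa * qc)) / (2.0 * qa)

# An occlusion (a parked truck, a hedge, a crest) shortens the sight
# distance, and the safe speed drops with it.
for d in (100.0, 50.0, 20.0):
    print(f"sight {d:5.0f} m -> max safe speed {max_safe_speed(d) * 3.6:5.1f} km/h")
```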
The simple answer is: The way we drive today has been deemed good enough. The roughly 35,000 annual vehicular deaths in the U.S. may sound like a lot—and each one is devastating for those involved—but statistically, you’d have to clock over 100,000,000 miles (that’s about 390 years of continuous driving) before a fatality would occur. When you remove risk factors like distracted driving (texting), impaired driving (inebriation), and fatigue (ATG mission specialists are only allowed to be behind the wheel for two hours; it was 12 before Tempe), the human brain is naturally equipped to observe, anticipate and react in trolley-like scenarios; self-driving robot brains are not quite there yet.
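The arithmetic behind that figure is easy to check. One fatality per 100 million vehicle-miles converts to roughly 390 years of continuous driving only at a particular average speed; about 29 mph, a plausible blend of city and highway driving that I am assuming here, makes the numbers line up:

```python
# Back-of-the-envelope check of the article's figures. The article gives the
# miles and the years; the ~29 mph average speed is my own assumption.
MILES_PER_FATALITY = 100_000_000  # roughly one traffic death per 100M vehicle-miles
AVG_SPEED_MPH = 29                # assumed average speed

hours_of_driving = MILES_PER_FATALITY / AVG_SPEED_MPH
years_of_driving = hours_of_driving / (24 * 365.25)
print(f"{years_of_driving:.0f} years of continuous driving")  # ~393, near the article's 390
```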
You get to flick the switch on the car’s autonomous driving mode during the third week of mission specialist training. And it’s this portion of the coursework that’s seen the most palpable paradigm shift since Tempe. While the measurement is subjective, the cars self-drive more defensively now—Uber initially placed too much confidence in the human hand as a failsafe—with enhanced detection and tracking that better distinguishes unusually shaped objects in motion like pedestrians and cyclists.
Data-grabbing methods have changed dramatically since Tempe as well. Before, there was a need to demonstrate progress through what the ATG team now calls a “land grab”: an objective way to provide measurable updates—miles clocked—as self-driving cars buzzed along busy boulevards, attempting interactions with the real world’s constant swarm of moving bodies. Rather than practicing maneuvers, like a left turn, the prevailing thought was to give the robotic driving software, with its vast memory capacity, a rote understanding of every street, corner and curve, like memorizing millions of phrases of a foreign language but never grasping conjugation or the fundamentals of sentence structure.
Now, the gold rush for charted miles has been replaced with a qualitative approach to learning: Teach the software all the rules and exceptions of the foreign language (like practicing every possible type of left turn on a test track) and it will know how to seamlessly generate any sentence it needs (on the road). As the software learns, it only operates within the limits of its taught environment, trialing new data points like a French student would with their teacher before getting on a flight to Paris—it’s a slower burn, but Uber’s no longer trying to boil the ocean.
And that’s where the trick robot trolley question comes into play. Like humans, artificial intelligence perceives, predicts and reacts to the scenarios it faces; the difference lies in the sophistication of the self-driving software applying those steps. When finely tuned, its forecasting capabilities will be so precise that an automated would-be trolley simply won’t find itself barreling toward five people, or one.
Never mind the prevailing societal distrust of AIs—the fear that they possess human decision-making skills without the moral veil—machines can take predictive measures that eliminate the potential risk that sets a trolley dilemma into motion in the first place. “With a constant 360 view of the world around it, self-driving vehicles operate without human performance issues to make safer, more informed decisions about how to react to things going on around it,” says Nat Beuse, the head of safety for Uber ATG. Superhuman technology like lidar (light detection and ranging), radar and cameras overcomes the sensory limitations that manual drivers inherently possess; essentially, there’s no such thing as a robot blind spot. This ultimately renders the trolley scenario obsolete, because the dilemma presupposes a failure of perception that forces a grim prediction (five people or one) and a reaction (choosing a path); a machine with a fuller view of the world engages earlier, isolating dangerous variables before they can harden into that choice.
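As a toy illustration of what engaging earlier might look like in code (my own sketch, assuming a simplified time-to-collision heuristic, not anything from Uber’s actual software), consider a planner that watches every tracked object and sheds speed long before any of them becomes an emergency:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A fused lidar/radar/camera detection (hypothetical structure)."""
    distance_m: float    # range to the object along the planned path
    closing_mps: float   # closing speed toward the object (<= 0 means receding)

def earliest_hazard_ttc(tracks: list[Track]) -> float:
    """Smallest time-to-collision across every tracked object."""
    ttcs = [t.distance_m / t.closing_mps for t in tracks if t.closing_mps > 0]
    return min(ttcs, default=float("inf"))

def plan_speed(current_mps: float, tracks: list[Track],
               comfort_ttc_s: float = 6.0) -> float:
    """Shed speed long before any object becomes a dilemma.

    If every object is more than comfort_ttc_s away in time, hold speed;
    otherwise slow proportionally until the margin is restored.
    """
    ttc = earliest_hazard_ttc(tracks)
    if ttc >= comfort_ttc_s:
        return current_mps
    return current_mps * (ttc / comfort_ttc_s)

# A pedestrian 40 m ahead, closing at 10 m/s, gives a 4 s TTC: the planner
# trims speed now instead of facing a fork in the tracks later.
print(plan_speed(15.0, [Track(distance_m=40.0, closing_mps=10.0)]))  # 10.0
```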
But what happens when the software beachballs? It swirls all the time on my laptop, so what do we do if it occurs in a self-driving car? Much of the autonomous coursework pertains to what’s called “fault injection”: negotiating purposefully created glitches in the self-driving technology so that history (i.e., Tempe) doesn’t repeat itself. The time it takes to regain control of the vehicle (moving the steering wheel or tapping either the accelerator or the brake returns driving capabilities to the pilot) is duly measured, as sudden stops and wrenches help trainees practice winnowing their reaction time down to as close to zero seconds as possible, should any hiccups happen out in the field. Honing driver alertness is a key factor in accident avoidance on the journey toward fully autonomous driving, since operator complacency was one of the leading causes of the Tempe incident. “Our current technology requires highly trained mission specialists behind the wheel at all times, and only once we validate that our system fulfills our safety case will we explore fully autonomous driving,” adds Wedge. And with the new, post-Tempe safety framework borrowed from the aviation industry, every risk must be accounted for (including an assurance that the software won’t beachball) before a driverless car can take flight.
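For flavor, here is a minimal mock-up of how such a takeover drill could be timed. The random fault window and the console prompt standing in for a wheel or pedal input are my own inventions; this only illustrates the measurement, not ATG’s actual harness:

```python
import random
import time

# Toy mock-up of timing a "fault injection" drill: a simulated fault arrives
# unannounced, and we measure how long the trainee takes to retake control.
# The fault window and console prompt are hypothetical stand-ins.

def run_takeover_drill(max_wait_s: float = 30.0) -> float:
    time.sleep(random.uniform(1.0, max_wait_s))  # the fault arrives unannounced
    fault_time = time.monotonic()
    print("FAULT INJECTED -- take the wheel! (press Enter)")
    input()                                      # stand-in for grabbing the wheel
    return time.monotonic() - fault_time

if __name__ == "__main__":
    takeover_s = run_takeover_drill()
    print(f"Takeover time: {takeover_s:.2f} s (the goal is as close to zero as possible)")
```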
Had I graduated, I would have become a part of an elite team at Uber ATG that—until the technology reaches an acceptable threshold of usage—grapples with yet another philosophical dilemma that Wedge refers to as the paradox of distrust: “The success of the mission specialist hinges upon living in the gray space between rooting for the technology to succeed, but expecting it to fail. You’re at once its most enthusiastic champion, but also its greatest disbeliever.”
Stranger still is the notion that the total success of the mission specialist team—achieving what’s called NVO (no vehicle operator)—means eventual redundancy; you’re essentially grooming technology to make yourself obsolete. So what’s the motivation (says the writer in the ever-dwindling field of journalism)?
The short answer is that Uber is not yet striving for a world of total autonomy. The current goal is developing realms of self-driving pistes: dedicated routes to which the technology is best suited.
Ten years ago, it was all guerrilla tactics as Uber parachuted onto the streets and shifted transportation paradigms so dramatically that cities had no choice but to obey. Today, those yottabytes of user data have become valuable tools for those very same cities, isolating the routes that are primed for autonomy: high-density arteries in a common operational domain where self-driving vehicles could alleviate traffic, reduce collision potential, and lower the transportation price point for the passenger.
Highways are a particularly conducive space for autonomy, as they best approximate commercial flying conditions: long, wide lanes, no corners or pedestrians, and a single forward flow of vehicles.
While car companies like Tesla are hoping to amp up sales with their glitzy versions of “autopilot” (which, for the record, is a souped-up version of cruise control, not proper autonomy), Uber would rather take the car out of your driveway. No dramatic decrees have been issued demanding that you abandon your vehicle completely, but an ideal near-future would allow commuters to follow self-driving conduits to their jobs downtown. The dream, for now, is to use autonomy to eliminate the symptoms of an overcrowded environment, like rush-hour bottlenecking, while retaining manual automobiles for close-to-home tasks like backstreet navigation and grocery shopping. And when the technology is ready, only passengers would ride in the robot car, reducing the operational costs of running the vehicle (which in turn cuts fares for riders), as there would no longer be a pilot monitoring navigation.
“Parking garages will soon be a vestige of the past,” notes Wedge, and the plans have been well underway for over a decade with the proliferation of the Uber ride-sharing app. But it’s the platform itself, which Uber also created, that holds the key to the future both for streamlining autonomy and for ramping up profitability—it’s the chessboard on which all eventual self-driving services will place their pieces: first cars, then freight trucks, and maybe even buses. But please, no trolleys.