Published: Nov. 6, 2023

In August, the California Public Utilities Commission voted to allow two self-driving car companies, Waymo and Cruise, to commercially operate their “robotaxis” around the clock in San Francisco.

Within hours, Cruise reported at least 10 incidents in which vehicles stopped short of their destinations, blocking city streets. The commission demanded the company recall 50% of its fleet.

Despite these challenges, other cities, including Las Vegas, Miami, Austin and Phoenix, have allowed autonomous vehicle startups to conduct tests on public roads.

"Self-driving car proponents see the jump from laboratories to real-world testing as a necessary step that has been a long time coming."

Self-driving car proponents see the jump from laboratories to real-world testing as a necessary step that has been a long time coming. The first autonomous vehicle was tested on the Autobahn in Germany in 1986, but advances stalled in the 1990s due to technology limitations.

After the Defense Department’s Advanced Research Projects Agency (DARPA) held its 2007 Urban Challenge, a competition for driverless vehicles, it seemed like the era of driverless cars had finally arrived. The competition kickstarted a Silicon Valley race to develop the first commercial driverless car. Optimism abounded, with engineers, investors and automakers predicting there would be as many as 10 million self-driving cars on the road by 2020.

“The question for the last 30 years is, how long is this going to take?” said Javier von Stecher (PhDPhys’08), senior software engineer at Nvidia who has worked on self-driving car technology at companies including Uber and Mercedes-Benz. “I think a lot of people were oversold on the idea that we could get this working fast. The biggest shift I’ve seen over the past decade is people realizing how hard this problem really is.”

The stakes may be high, but that’s not deterring CU Boulder researchers. From creating systems and models to studying human-machine interactions, university teams are working to advance the field safely and responsibly as self-driving cars become a fixture in our society.

Their next big question: Can we learn to trust these vehicles?

Cruise Control

The idea behind autonomous vehicles is simple. An artificial intelligence system pulls in data from an array of sensors, including radar, high-resolution cameras and GPS, and uses this data to navigate from point A to point B while avoiding obstacles and obeying traffic laws. Sounds simple? It’s not.
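
To make that pipeline concrete, here is a minimal sketch of the sense-plan-act loop in Python. It is purely illustrative: the world is one-dimensional, the “sensors” are perfect, and every name and number is hypothetical rather than drawn from any company’s actual driving stack.

```python
def perceive(true_car_pos, true_obstacle_pos):
    """Stand-in for sensor fusion over radar, cameras and GPS."""
    return {"car": true_car_pos, "obstacle": true_obstacle_pos}

def plan(world, destination):
    """Pick a speed that makes progress without hitting the obstacle."""
    gap = world["obstacle"] - world["car"]
    if 0 < gap <= 5.0:
        return 0.0  # obstacle close ahead: stopping is the safe default
    return min(2.0, destination - world["car"])  # otherwise advance

def act(car_pos, speed):
    """Stand-in for throttle and steering: apply the chosen motion."""
    return car_pos + speed

car, obstacle, destination = 0.0, 12.0, 30.0
for step in range(5):
    world = perceive(car, obstacle)   # sense
    speed = plan(world, destination)  # plan
    car = act(car, speed)             # act
    print(f"step {step}: position {car:.1f} m, speed {speed:.1f} m/s")
```

Even this toy shows the loop’s shape: perception builds a model of the world, planning chooses a motion, and actuation applies it, many times per second.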

When a self-driving car encounters an unexpected obstacle, it makes split-second judgment calls (should it brake or swerve around it?) that develop naturally in humans but are still beyond even the most sophisticated AI systems.

Moreover, there will always be an edge case that the AI-powered car hasn’t seen before, which means the key to safe autonomous vehicles is building systems that can reliably favor safe choices in unfamiliar situations.
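
One common way to encode that bias toward safety is to gate decisions on the perception system’s own confidence. The sketch below, with invented thresholds and labels, shows the idea: when the scene looks unfamiliar, the car defaults to the lowest-risk maneuver rather than guessing.

```python
def choose_maneuver(detected_object, confidence, threshold=0.9):
    """Act on perception only when it is confident; otherwise fall
    back to the lowest-risk maneuver available."""
    if confidence < threshold:
        # The scene looks unlike the training data: don't guess.
        return "brake to a minimal-risk stop"
    if detected_object is None:
        return "continue in lane"
    return "swerve around the obstacle"

# An ambiguous detection (plastic bag? pedestrian?) with low confidence:
print(choose_maneuver("unknown object", confidence=0.41))
# -> brake to a minimal-risk stop
```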

Majid Zamani, associate professor of computer science, studies how to create software for autonomous systems such as cars, drones and airplanes. In autonomous vehicles’ AI systems, data flows into the AI and helps it make decisions. But how the AI arrives at those decisions is a mystery. This, said Zamani, makes it difficult to trust the AI system, and yet trust is critically important in high-stakes applications like autonomous driving.

“These are what we call safety-critical applications because system failure can cause loss of life or damage to property, so it’s really important that the way those systems are making decisions is provably correct,” Zamani said.

In contrast to AI systems that use data to create models that are not intelligible to humans, Zamani advocates a bottom-up approach in which the AI’s models are derived from fundamental physical laws, such as those governing acceleration or friction, which are well understood and unchanging.

“If you derive a model using data, you have to be able to ensure that you can quantify how much error is in that model and the actual system that uses it,” Zamani said.
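
A toy example of what Zamani is asking for: take a physics-derived model, here the standard braking-distance formula v² / (2μg), compare its predictions against measurements of the real system, and carry the worst disagreement forward as an explicit error bound. The friction value and “measurements” below are invented for illustration.

```python
G = 9.81   # gravity, m/s^2
MU = 0.7   # assumed tire-road friction coefficient (illustrative)

def predicted_braking_distance(speed):
    """Physics-derived model: braking distance = v^2 / (2 * mu * g)."""
    return speed ** 2 / (2 * MU * G)

# Hypothetical test-track measurements: (speed in m/s, distance in m).
measurements = [(10.0, 7.9), (20.0, 30.5), (30.0, 67.2)]

# The worst observed disagreement becomes an explicit error bound the
# controller must budget for, e.g. by padding every following distance.
error_bound = max(abs(predicted_braking_distance(v) - d)
                  for v, d in measurements)
print(f"model error bound: {error_bound:.1f} m")
```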

Mathematically demonstrating the safety of the models used by autonomous vehicles is important for engineers and policymakers who need to guarantee safety before the vehicles are deployed in the real world. But this raises some thorny questions: How safe is “safe enough,” and how can autonomous vehicles communicate these risks to drivers?

Computer, Take the Wheel

Each year, more than 40,000 Americans die in car accidents, and, according to the National Highway Traffic Safety Administration (NHTSA), about 90% of U.S. auto deaths and serious crashes are attributable to driver error. The great promise of autonomous vehicles is to make auto deaths a relic of history by eliminating human errors with computers that never get tired or distracted.

The NHTSA designates six levels of “autonomy” for self-driving cars, which range from Level 0 (full driver control) to Level 5 (fully autonomous). For most of us, Level 5 is what we think of when we think of self-driving cars: a vehicle so autonomous that it might not even have a steering wheel and driver’s seat because the computer handles everything. For now, this remains a distant dream, with many automakers pursuing Level 3 or 4 autonomy as stepping stones.

“Most modern cars are Level 2, with partial autonomous driving,” said Chris Heckman, associate professor and director of the Autonomous Robotics and Perception Group in CU Boulder’s computer science department. “Usually that means there’s a human at the wheel, but they can relegate some functions to the car’s software, such as automatic braking or adaptive cruise control.”
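
The taxonomy itself is simple enough to write down directly. This sketch restates the six levels as a Python lookup; the one-line summaries are paraphrases of the descriptions above, not regulatory language.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    L0 = 0  # full driver control
    L1 = 1  # one assist feature at a time, e.g. adaptive cruise control
    L2 = 2  # partial automation; a supervising human stays at the wheel
    L3 = 3  # conditional automation; driver must take over on request
    L4 = 4  # high automation within a limited area or set of conditions
    L5 = 5  # full automation everywhere; no steering wheel needed

print(AutonomyLevel.L2.name)  # where Heckman places most modern cars
```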

While these hybrid AI-human systems can improve safety by assisting a driver with braking, acceleration and collision avoidance, limitations remain. Several fatal accidents, for example, have resulted from drivers’ overreliance on autopilot, an overreliance rooted in human psychology and in misunderstanding what the AI can and can’t do.

Fostering Trust

This problem is deeply familiar to Leanne Hirschfield, associate research professor at the Institute of Cognitive Science and director of the System-Human Interaction with NIRS and EEG (SHINE) Lab at CU Boulder. Hirschfield’s research focuses on using brain measurements to study the ways humans interact with autonomous systems, like self-driving cars and AI systems deployed in elementary school classrooms.

"When an autonomous vehicle can show the driver information about how it鈥檚 making decisions or its level of confidence in its decisions, the driver is better equipped to determine when they need to grab the wheel."

Trust, Hirschfield said, is a willingness to be vulnerable and take on risks. For decades, the dominant engineering paradigm has focused on ways to foster total trust in autonomous systems.

“We’re realizing that’s not always the best approach,” Hirschfield said. “Now, we’re looking at trust calibration, where users often trust the system but also have enough information to know when they shouldn’t rely on it.”

The key to trust calibration, she said, is transparency. When an autonomous vehicle can show the driver information about how it’s making decisions or its level of confidence in its decisions, the driver is better equipped to determine when they need to grab the wheel.
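
In code, that kind of transparency can be as simple as surfacing the system’s confidence alongside its decision. The thresholds and messages in this sketch are invented, but they show the idea: below some confidence level, the interface stops reassuring the driver and asks for the wheel.

```python
def dashboard_message(maneuver, confidence):
    """Show the system's confidence so the driver can calibrate trust."""
    if confidence >= 0.95:
        return f"{maneuver} (confidence {confidence:.0%})"
    if confidence >= 0.70:
        return f"{maneuver} (confidence {confidence:.0%}) - stay alert"
    return f"low confidence ({confidence:.0%}): please take the wheel"

for c in (0.98, 0.82, 0.40):
    print(dashboard_message("merging left", c))
```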

Studying user responses is challenging in a laboratory setting, where it’s difficult to expose drivers to real risks. So Hirschfield and researchers at the U.S. Air Force Academy have been using a Tesla modified with a variety of internal sensors to study user trust in autonomous vehicles.

“Part of what we’re trying to do is measure someone’s level of trust, their workload and emotional states while they’re driving,” Hirschfield said. “They’ll have the car whipping around hills, which is how you need to study trust because it involves a sense of true risk compared to a study in a lab setting.”

Hirschfield said researchers have made a lot of progress in understanding how to design autonomous vehicles that foster driver trust, but there is still a lot of work to be done.

Human-Centered Design

Sidney D’Mello, a professor at the Institute of Cognitive Science, studies how human-computer interactions shift the way we think and feel. For D’Mello, it’s unclear whether the current crop of self-driving cars can shift from today’s engineering-first approach to a new driver-focused paradigm.

“I think we need an entirely new methodology for the self-driving car context,” D’Mello said. “If you really want something you can trust, then you need to design these systems with users starting from day one. But every single car company is kind of stuck in this engineering mindset from 50 years ago where they build the tech and then they present it to the user.”

The good news, D’Mello said, is that automakers are starting to take this challenge seriously. A collaboration between Toyota and the Institute of Cognitive Science focused on designing autonomous vehicles that foster trust in the user.

“The autonomous model typically implies the AI is in the center with the human hovering around it,” said D’Mello. “But this needs to be a model with the human in the center.”

Even when users learn to trust autonomous vehicles, living with driverless cars and reconceptualizing how they relate to them is complex. But there’s a lot we can apply from research on prosthetics, said Cara Welker, assistant professor in biomechanics, robotics and systems design.

Much like autonomous vehicles analyze surroundings to make navigation and control decisions, robotic prostheses monitor a wearer’s movements to understand appropriate behavior. And just as teaching users to trust prosthetics requires strong feedback loops and predictable prosthetic behavior, teaching drivers to trust autonomous vehicles means providing drivers with information about what the AI is doing, and it requires drivers to reconceptualize vehicles as extensions of themselves.

“There’s a difference between users being able to predict the behavior of an assistive device versus having some kind of sensory feedback,” Welker said. “And this difference has been shown to affect whether people think of it as ‘me and my prosthesis’ instead of just ‘me, which includes my prosthesis.’ And that’s incredibly important in terms of how users will trust that device.”

How, then, will drivers evolve to experience cars as extensions of themselves?

Next Exit

In 2018, a pedestrian was struck and killed by a self-driving Uber in Arizona, which marked the first fatality attributed to an autonomous vehicle. Although the backup driver pleaded guilty in the case, the question of who is responsible when autonomous vehicles kill is far from settled.

Today, there is limited regulation dictating autonomous vehicle safety and liability. One problem is that vehicles are regulated at the federal level while drivers are regulated at the state level, a division of responsibility that doesn’t account for a future where the driver and vehicle are more closely aligned.

Researchers and automakers have voiced frustration with existing autonomous driving regulations and agree that updates are necessary. Ideally, new rules would ensure driver, passenger and pedestrian safety without quashing innovation. But what these policies might look like is still unclear.

The challenge, said Heckman, is that engineers don’t have complete control over how autonomous systems behave in every circumstance. He believes it’s critical for regulations to account for this without insisting on impossibly high safety standards.

“Many of us work in this field because automotive deaths seem avoidable and we want to build technologies that solve that problem,” Heckman said. “But I think we hold these systems [to] too high of a standard, because yes, we want to have safe systems, but right now we have no safety frameworks, and automakers aren’t comfortable building these systems because they may be held to an extremely high liability.”

Other industries may offer a vision for how to regulate the autonomous driving industry while providing acceptable safety standards and enabling technological development, Heckman said. The aviation industry, for example, adopted rigorous engineering standards and fostered trust among engineers, pilots, passengers and policymakers.

“There’s an engineering principle that trust is a perception of humans,” Heckman said. “Trust is usually built through experience with a system, and that experience confers trust on the engineering paradigms that build safe systems.

“With airplanes, it took decades for us to come up with designs and engineering paradigms that we feel comfortable with. I think we’ll see the same in autonomous vehicles, and regulation will follow once we’ve really defined what it means for them to be trustworthy.”