The Moral Crumple Zone: Respected AV Thought Leader Dr. Philip Koopman Testifies in Washington D.C.

With automotive manufacturers marching toward more highly automated vehicles, the elephant in the room keeps getting bigger. Are the vehicles on the road today safe? Will they be safe in the future? One man trying to convince industry and lawmakers that today's self-driving software is by no means safe is Dr. Philip Koopman. He is also trying to update tort law to address the future of mobility.

The Moral Crumple Zone

Dr. Koopman is a professor at Carnegie Mellon University who has been working on self-driving car safety for over 25 years. He was selected by the National Safety Council (NSC) to join its inaugural Mobility Safety Advisory Group, which advises the NSC on strategies and policies for safe mobility. Many will already recognize Dr. Koopman's thought leadership on ADAS and autonomy, as he has appeared on podcasts, webinars, and at industry events as a speaker and panelist.

In collaboration with a lawyer, he has proposed an update to tort law, the area of law that covers most civil lawsuits, providing relief to those harmed by the wrongful acts of others and deterring future harmful conduct. Today, tort law protects manufacturers and places blame on drivers who, manufacturers argue, should have "taken the wheel," a scenario Dr. Koopman describes as putting humans in the "moral crumple zone."

The term "moral crumple zone" was coined by anthropologist Madeleine Clare Elish in her academic paper, "Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction." In her abstract, she defines it as a situation in which the human operator of a highly complex and automated system becomes simply a component that bears the brunt of the moral and legal responsibility when the overall system malfunctions. Today, Elish is head of responsible AI at Google.

Dr. Koopman contends that self-driving systems should be held to the same standards as human drivers, meaning manufacturers should be accountable when an AV system commits a tort, causing property damage, injury, or death.

Dr. Philip Koopman, Associate Professor of Electrical and Computer Engineering at Carnegie Mellon University.

Safe Harbor

In his proposal, he suggested lawmakers institute a “duty of care” for computer drivers equivalent to what human drivers have toward other road users. He asks lawmakers to establish a clear way to determine if the computer driver or human driver has the duty of care. “In brief, when the computer driver is steering, the computer driver has the duty of care,” he said. “The computer can return the duty of care to the human driver if the human driver ignores driver monitor warnings or the computer requests a transfer back to the human. In both cases, there is a 10-second safe harbor for the human to regain control before the duty of care transfers back to the human.”

Dr. Koopman recently spent a morning testifying on self-driving car safety during a U.S. House legislative hearing. The purpose of the hearing was for both sides, proponents of a bill sponsored by Rep. Bob Latta (R-OH) and proponents of a different bill sponsored by Rep. Debbie Dingell (D-MI), to get their points out on the table for discussion, Dr. Koopman explained in an interview after returning from the capital.

For his part, Dr. Koopman submitted that the race against other countries toward autonomous driving should be about safe and reliable automated driving. He argued that any other focus, including an emphasis on the deployment of robotaxis and robotrucks to spur economic growth, obscures the fundamental issue that the industry’s aggressive deployment of immature and sometimes even experimental technology is outstripping the ability of both industry and regulators to ensure safety.

Safety Over Sales

The focus of his presentation to the committee was on prioritizing safety and reliability over sales. He argued that current regulations lack sufficient safety limits and enforcement, allowing manufacturers to prioritize their own interests over the public's safety. He contended that the U.S. federal government should require manufacturers to adopt industry-consensus standardized safety practices and to be transparent about crash data and incidents. He also argued that partially automated systems should be covered by regulation because they are not actually safety features.

“Companies have shown they will be as opaque as possible about reliability and safety, as well as evade accountability whenever possible,” he said during his testimony. “They are losing the public’s trust. If the government does not find a way to incentivize [U.S.] companies to achieve reliability and safety, foreign companies that produce reliable, safe vehicles will likely win in the long term.”

I asked Dr. Koopman what questions the committee had for him. "It is difficult to remember the details since it was three hours of interaction," he said. "One that sticks is whether computer drivers are safer than human drivers." According to his written testimony, he answered, "We have one or two or three million miles of robotaxi operations now, depending on the company. At 100 million miles or more between human driver fatalities, it's another 97 million or more miles before we might confirm computer drivers are safer – assuming there are zero fatalities before then. Nobody knows whether people or computers will be better. It is likely that a human assisted by a safety-minded computer (rather than a steering convenience feature) would be safest of all for at least the next few years and potentially much longer."

Common Misconceptions

I also asked him what was most misunderstood by the panel. “The biggest myth remains that computer drivers will automatically reduce the U.S. road fatality rate,” he said. “This is based on the incorrect impression that 94 percent of crashes are caused by human error (a misinterpretation of a U.S. government study) and implies that computers will therefore be almost 20 times safer because they don’t make human errors.”

Dr. Koopman addressed this in the hearing, later saying, “The reality is that humans and computers both make driving mistakes. While the companies have an impressive million or so miles of autonomous driving on public roads, we will need a hundred million miles or more to see if the predictions of reduced fatality rates come true.” He was also able to remind lawmakers of the proposed framework established in December 2020 for automated driving safety. “U.S. DOT just needs a Congressional mandate to act on that and move forward,” he said.

Regarding the outcome of the hearing, Dr. Koopman explained there was a set of promises to collaborate on a bipartisan bill. “I’m certainly happy to contribute,” he said. “We’ll see how it turns out.”

The entire hearing, "Self-Driving Vehicle Legislative Framework: Enhancing Safety, Improving Lives and Mobility, and Beating China," can be found on the YouTube channel of the Committee on Energy and Commerce.