
If You Want To Understand Robotaxis, Beware San Francisco Safety Fog

Apr 26, 2024

Image caption: City skyline and Golden Gate Bridge after sunset (San Francisco, California), with downtown and low fog in the background.

In discussions about robotaxis, not all types of “safety” are alike. Nor are all “failures.” For several months, San Francisco public figures and the mass media have been throwing around these words while missing key distinctions. The nuances matter immensely. Let’s take a look at the three key forms of what I call “robotaxi fog.”

Fog 1: Not All Robotaxis Are Alike

Cruise and Waymo are currently operating hundreds of self-driving vehicles on San Francisco streets. When an incident occurs with a robotaxi, a typical news story opens with the name of the company operating the offending vehicle along with basic information. The vehicle's wrongs, and any other unacceptable behavior, are stated and explored. The typical article then concludes with a discussion of the problems of "robotaxis" as a whole. Wait a minute! Are both of these companies running exactly the same software and hardware? Of course not. So if a Cruise incident occurs, it's only about Cruise, and a Waymo incident is only about Waymo. Over and over, lazy journalism paints both companies with the same broad brush. If both companies were bumbling around at the same level, this wouldn't matter much. But the reality is that Cruise incidents by far outnumber Waymo incidents. If Cruise can't bring down their incident count, their business is in jeopardy.

Fog 2: Not All Failures Are Alike

In what seems to be a recurring bad dream, Cruise once again made the news for the wrong reason by being in the wrong place at the wrong time. On the evening of August 11th in San Francisco's North Beach, the Outside Lands music festival was going full tilt. Enough of the 100,000+ fans were on their phones to cause cellular connectivity to crash. Just outside the festival site, about ten Cruise robotaxis simultaneously stopped in the traffic lanes of Grant Avenue, flashing their hazard lights and blocking regular traffic. They were probably there to take patrons home, but when their cellular tether disappeared, the cars lost contact with the Cruise operations center. Cruise said their cars are programmed to cease operating if there is a loss of connectivity. The usual strident faux-safety voices went nuts! The typical gripe went something like this: "How could Cruise possibly build their system with a reliance on cellular, which everyone knows can drop out?" For many, there was an immediate leap to say this need for connectivity is yet more proof that the Cruise fleet is a threat to San Franciscans.

But here’s the thing: this incident was not a “safety” incident, it was an “operational” incident. From what is known, no safety issues were associated with this incident. In losing cellular connectivity, the cars were programmed to attain a “minimal risk condition” which in this case was to stop. From what I know of robotaxi development in general, the vehicles were fulfilling their safety goals during the entire event. What was not fulfilled was their operational objectives. If you’re Cruise, you must achieve safety objectives 100% of the time and you aim to achieve operational objectives at close to 100%. Clearly, Cruise has more work to do.

These types of operational failures can occur in this early stage of robotaxi field deployment. As far as I know, there is no published data on these occurrences. They can be troublesome in a traffic sense, but they are not safety incidents.

Once again, the descriptors of robotaxi incidents can be mired in obfuscation. News of a robotaxi failure has no meaning unless the nature of the failure, i.e. safety or operational, is clear.

Fog 3: There’s More Than One Kind of Safety

Informed dialogue about robotaxis is lacking when it comes to the difference between "traffic safety" (crashes between road users, i.e. cars, bikes, pedestrians) and "public safety," which for drivers (human or robotic) is about properly maneuvering near active fire and police scenes. These are very different aspects of driving. The kinetic energy in a vehicle crash can instantly injure or kill human beings. By contrast, disruptions caused by dumb robotaxi driving at an incident scene (driving through yellow tape, driving over a fire hose, etc.) hinder the efforts of our dedicated public safety professionals. Certainly, it is possible that a disabled fire hose could result in loss of life. However, the instantaneous harm that occurs in a vehicle crash is vastly different from the potential harm of disrupting the duties of first responders.

At the opening of the TRB Automated Road Transportation Symposium in San Francisco last month, Jeffrey Tumlin, Executive Director of the San Francisco Municipal Transportation Agency, launched into an energetic rant about the harm being done to the city by robotaxis, railing against the well documented issues robotaxis have had with fire and police scenes. For conference attendees, Tumlin set a sober tone, implying that deployed robotaxis are dangerous. He didn't, however, choose to speak to the traffic safety record of these automated vehicles.

In an apparent response to Tumlin during the second day of the conference, Dr. Trent Victor, Waymo’s Director of Safety Research and Best Practices, noted that in two million miles of commercial robotaxi operations there has been only one minor property damage claim and zero bodily injury events.

For their part, Cruise published a blog post early this year in which Louise Zhang, VP for Safety & Systems, reported that in over one million miles of robotaxi operations their vehicles had 54% fewer collisions overall, 92% fewer collisions as the primary contributor, and 73% fewer collisions with meaningful risk of injury, relative to human driver traffic data.

By no means should disruptive robotaxi behaviors at active incident sites be condoned. The tech must be smart enough to “get it right” or “get out of town.” The harm caused in hindering first responders is real. My point here is to highlight the critical importance of speaking clearly and precisely when discussing robotaxi incidents. Traffic safety incidents are very different from public safety incidents.

Greg Dieterich, Cruise's general manager in San Francisco, noted earlier this month that "During the course of more than 3 million miles of fully autonomous driving in San Francisco we've seen an enormous number of emergency vehicles – more than 168,000 interactions just in the first 7 months of this year alone." When robotaxis properly interact with first responders, it's not likely to make the news.

Seeing Clearly Now? Soon?

Whether it be Cruise or Waymo, or nascent players Motional and Zoox, here's what to watch for: how often do specific types of incidents repeat? Although we continue to see robotaxis stop in traffic again and again, it's likely that the more recent instances are very different from the incidents that occurred several months ago. The tech teams at these companies aim to learn from each incident and adjust the system's intelligence so that it doesn't recur. At least, that would be the optimum state. To the extent the robotaxi tech teams succeed in this endeavor, we'll see incident rates steadily dropping.

I don't expect the mass media to change their ways, but I’m hoping this article helps robotaxi observers read the news more accurately. Or at least, know when the news doesn't provide enough information to really understand what happened!
