Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight, to a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.

Gareth Corfield at The Register added:
The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.
Google binned its self-driving cars' "take over now, human!" feature because test drivers kept dozing off behind the wheel instead of watching the road, according to reports.

Follow me below the fold for a wonderful example of Tesla's handoff problem, and a discussion of the difference between Tesla's and Waymo's approaches to self-driving.
"What we found was pretty scary," Google Waymo's boss John Krafcik told Reuters reporters during a recent media tour of a Waymo testing facility. "It's hard to take over because they have lost contextual awareness."
I wrote about this handoff problem in 2017's Techno-hype part 1. I did a thought experiment, imagining mass-market cars 3 times better than Waymo's at the time:
A normal person would encounter a hand-off once in 15,000 miles of driving, or less than once a year. Driving would be something they'd be asked to do maybe 50 times in their life.

I concluded:
Even if, when the hand-off happened, the human ... had full "situational awareness", they would be faced with a situation too complex for the car's software. How likely is it that they would have the skills needed to cope, when the last time they did any driving was over a year ago, and on average they've only driven 25 times in their life? Current testing of self-driving cars hands-off to drivers with more than a decade of driving experience, well over 100,000 miles of it. It bears no relationship to the hand-off problem with a mass deployment of self-driving technology.
But the real difficulty is this. The closer the technology gets to Level 5, the worse the hand-off problem gets, because the human has less experience. Incremental progress in deployments doesn't make this problem go away.

Raffi Krikorian:
I used to run the self-driving-car division at Uber, trying to build a future in which technology protects us from accidents. I had thought about edge cases, failure modes, the brittleness hiding behind smooth performance. My team trained human drivers on when and how to intervene if a self-driving car made a mistake. In the two years I ran the division, we had no injuries in our early pilot programs.

He has an article in the current Atlantic entitled My Tesla Was Driving Itself Perfectly—Until It Crashed with the sub-head:
The danger of almost-perfect tech

As an enthusiast for self-driving technology, Krikorian used it:
With my own Tesla, I started out using Full Self-Driving as the default setting only on highways. That’s where it makes sense: You have clear lane markers and predictable traffic patterns. Then, one day, I tried it on a local road, and it worked well enough to become a habit.

But, after three years:
My memory is hazy, and some of it comes from one of my sons, who watched the whole thing unfold from the back seat. The car was making a turn. Something felt off—the steering wheel jerked one way, then the other, and the car decelerated in a way I didn’t expect. I turned the wheel to take over. I don’t know exactly what the system was doing, or why. I only know that somewhere in those seconds, we ended up colliding with a wall.

He didn't have "situational awareness", even though he was an experienced driver aware of the handoff problem. He sums up the current problem, with drivers like him:
Full Self-Driving works almost all of the time—Tesla’s fleet of cars with the technology logs millions of miles between serious incidents, by the company’s count. And that’s the problem: We are asking humans to supervise systems designed to make supervision feel pointless. A machine that constantly fails keeps you sharp. A machine that works perfectly needs no oversight. But a machine that works almost perfectly? That’s where the danger lies. After a few hours of flawless performance, research shows, drivers are prone to start overtrusting self-driving systems. After a month of using adaptive cruise control, drivers were more than six times as likely to look at their phone, according to one study from the Insurance Institute for Highway Safety.

Imagine this problem compounded by handing off to a driver who hadn't driven in a year.
Google was building Level 4 robotaxis. Their conservative approach was to eliminate the handoff problem completely. Waymos operate on carefully mapped routes after much practice, and are equipped with a diverse set of sensors. Just as airliners have a designated diversion airport everywhere along their flight path, Waymos always know a safe place to stop and ask for help from remote humans. The humans don't drive the cars, they just advise the car on how to solve the problem. This can, as I have seen a couple of times, cause frustration among other road users, but it is safe.
Tesla, on the other hand, had a Level 2 driver assist system with a limited set of sensors, which depended on handing off to the driver in case of confusion. They consistently marketed it as "Full Self-Driving" with exaggerated claims about its capabilities, and sold it to normal, untrained drivers. They could not, and could not afford to, implement Google's approach. Why not?
- Scale: Tesla has 1.1M FSD customers, whereas six months ago Waymo had about 2K cars in service. To support them, Waymo has about 70 remote operators on duty. Of course, FSD is used much less intensively, let's guess only 5% as much. Even if, optimistically, Tesla's technology generated as few remote requests as Waymo's, they would need almost 2,000 remote operators on duty (the arithmetic is sketched after this list).
- Technical: First, Tesla markets FSD as usable anywhere, even if their terms of service disagree. So they lack the detailed maps Waymos use when they need to find a safe place. Second, Tesla has far fewer sensors, so has much less information on which to base the need for and choice of a safe place.
- Marketing: There are two problems. First, telling the public that FSD will sometimes need to stop and ask for help goes against the idea that it is "Full Self Driving". Second, everyone can see that a Waymo is driving itself and can set their expectations to match. No-one can tell that a Tesla is using Fake Self Driving. So if a Tesla stopped unexpectedly, even when it wasn't using Fake Self Driving, the assumption would be that the technology had failed.
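As promised above, here is a minimal Python sketch of the back-of-the-envelope arithmetic behind the Scale point. The inputs are the figures quoted in the list; the 5% usage intensity is the guess made there, not a measured number.

```python
# Back-of-the-envelope estimate of the remote operators Tesla would need
# if FSD generated remote-assistance requests at the same per-car rate as
# a Waymo. Inputs are the figures quoted above; the 5% intensity is a guess.

fsd_customers = 1_100_000    # Tesla FSD customers
waymo_cars = 2_000           # Waymo cars in service (six months ago)
waymo_operators = 70         # Waymo remote operators on duty
fsd_intensity = 0.05         # guess: FSD used only 5% as intensively

# Scaling FSD customers by intensity gives "Waymo-car equivalents"
fsd_car_equivalents = fsd_customers * fsd_intensity            # 55,000
operators_needed = fsd_car_equivalents / waymo_cars * waymo_operators

print(f"{operators_needed:,.0f} remote operators on duty")     # ~1,925
```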
Update 4th April
Waymo is now providing 500,000 paid robotaxi rides every week across 10 U.S. cities, the company shared in a post on X this week. The eye-popping figure is reflective of the Alphabet-owned company’s accelerated commercial expansion. But it’s Waymo’s rate of growth in ridership and markets that offers a more compelling story.

The fleet hasn't grown with the rides, showing increased utilization and thus improved economics:
In less than two years, the company’s average weekly paid robotaxi trips have grown tenfold, from 50,000 per week in May 2024 to 500,000 per week today. Over that same two-year timespan, Waymo has expanded within its initial markets of Phoenix, San Francisco, and Los Angeles — and beyond them to Austin, Atlanta, Miami, Dallas, Houston, San Antonio, and Orlando. Those seven cities in the Sun Belt were all added in just the past year.
Waymo’s robotaxi fleet has also grown, although the company has guarded those numbers and rarely provides updates. Data provided in December 2025 to the National Highway Traffic Safety Administration (NHTSA) shows the company had 3,067 robotaxis equipped with its 5th generation self-driving system. The company still uses that “over 3,000” fleet number today. That could soon change with the introduction of its 6th generation self-driving system, which will debut on the Zeekr minivan, known as Ojai, and the Hyundai Ioniq 5.


10 comments:
This made me think of the way we use AI today for software development. It's almost perfect, but the handoff when it fails is going to cause all sorts of grief.
You have a lot of untrained empowered users, building software with not much more than vibes to take them out of the weeds when they inevitably get into them.
It's going to be interesting (in the ancient Chinese curse sense of the word).
Business Insider covers this topic in their YouTube video Why Fully Self-Driving Cars Are Almost Impossible.
We are rapidly approaching a point in time when a human’s primary task is to quickly gain context from an AI and execute a decision that the AI then follows through on. Within the framework of driving, the ability to quickly gain context is temporally restricted by a pending accident.
As AI capabilities increase, the need for humans to make faster and more efficient decisions will increase. Eventually humans will work so closely alongside their AI companions that the context the AI provides to their human to make a decision will be read and acted on faster than what other humans can even process the question. Those watching this interaction will see flashes appear on a screen, a human manically pressing buttons, the screen changing, and the cycle starting again. It will be like watching the very best humans play Tetris. Whereas onlookers have no concept of what is even happening, the very best Tetris players are able to smoothly handle insane speeds. Likewise, speed chess players are able to rapidly make moves against opponents, to the point where onlookers feel they are barely playing a game with rules.
Our current self driving cars have not learned their human’s processing preferences to be efficient enough to act in the temporal timeframe needed. We need to be able to quickly gain context and make a decision, but our current systems are not robust enough to provide the context in an efficient enough way for us to instinctually understand.
Tim, it would have been better if you had read the post and also First We Change How People Behave. Your scenario is a recipe for disaster, see the 737 MAX and Air France 447.
In order for humans to respond as rapidly as you want "faster than what other humans can even process the question", we need both skill and a lead-up to the event, or skill and a simplified playing field with definite rules.
I watch small animals run around, and they do so with speed that humans can't do. We can't do it because our large bodies don't signal that fast. When we do move as fast as them it's because we're running out a pre-planned script, or are flailing about. Any pre-planned script we can run, a computer can do faster. And while flailing about sometimes succeeds, it usually doesn't.
There's a reason blitz chess players, when playing stronger or equal opponents, start slowing down after the early game. Chess and Tetris both have studies of winning strategies. You can use winning strategies when driving, too, but the variants (e.g. getting side-swiped during hail) are rare enough that people need time to think when they happen in order to respond. Most of us do not have experience of what to do with a novel driving event. Some of us occasionally remember a relevant point learned in driver's ed. But usually we can't respond other than braking or trying to get out of the way.
Though there are a finite number of novel events. It's easier to simulate them, and then train a computer to handle them, than to get an AI to give a person who wasn't paying attention exactly the information they need to deal with it right now.
I'd prefer either fully autonomous, no-decision driving (e.g. trams, all cars centrally controlled), or the person always being in charge, with the computer only intervening to stop a foreseeable accident. Anything in between these options, where humans and computers switch out decision making, is a recipe for disaster.
Well. Not only was I influenced by the post (which I knew when writing my comment), but I accidentally copied your "recipe for disaster" without conscious awareness. What it is to be human. Or AI. :D
I sincerely appreciate your reply. Please accept my apologies for filling your comment section with frivolous drivel. My own recent experimentation with agentic system interfaces gave me a bias which, along with skimming the article, caused me to miss your entire thesis. The more of your site I read, the more I see how brilliant you are and how you are raising legitimate points that need to be heard. Upon reflection, I should have engaged with your article and understood your point of view before raising my voice.
I'm sorry it took me a while to figure out where the misunderstanding came from. There are two kinds of handoffs:
- A positive handoff, when the computer decides something is wrong and transfers control to the human.
- A negative handoff, when the human decides something is wrong and takes control from the computer.
The common factor is that in both cases the computer is wrong, so improving its ability to communicate information to the human will make the problem worse because the information will be wrong.
Krikorian's crash was a negative handoff, as were the 737 MAX and Air France crashes. In each case by the time the human took control it was too late. It is inevitable that this will happen. A decade ago Paul Vixie wrote:
"Simply put, if you give a human brain the option to perform other tasks than the one at hand, it will do so. No law, no amount of training, and no insistence by the manufacturer of an automobile will alter this fact. It's human nature, immalleable. So until and unless Tesla can robustly and credibly promise an autopilot that will imagine every threat a human could imagine, and can use the same level of caution as the best human driver would use, then the world will be better off without this feature."
When Waymos see a problem their priority is to maintain safety. When Tesla's Fake Self Driving sees a problem its priority is to ensure the human gets the blame.
Interesting points I wouldn't have thought of.
Sometimes wouldn't the human be the error in a negative hand-off? Regardless, responding to Tim's argument, I assume that most times a hand-off occurred the situation would be so novel that an AI would have low probability of predicting how the human would respond. Even a personalized AI would have low probability of predicting the human. So it couldn't give the human the data that would encourage its (the AI's) preferred outcome. Unless said data was "brace yourself for impact".
So Waymo claims 500K rides per week. Let's assume that "over 3000 cars" means 3,500. That is 143 rides/week/car, or 20 rides/day/car on average. If we assume rides primarily happen during 18 hours/day, that is a bit over 1 ride/hour/car during working hours; already an impressive level of utilization.
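A minimal Python sketch of that utilization arithmetic, assuming "over 3000 cars" rounds up to 3,500 and an 18-hour operating day as above:

```python
# Waymo utilization estimate from the figures above: 500K paid rides/week
# spread over an assumed fleet of 3,500 cars, with rides assumed to fall
# within an 18-hour operating day.

rides_per_week = 500_000
fleet_size = 3_500              # assumption: "over 3000 cars" rounded up
operating_hours_per_day = 18    # assumption

rides_per_car_week = rides_per_week / fleet_size                    # ~143
rides_per_car_day = rides_per_car_week / 7                          # ~20
rides_per_car_hour = rides_per_car_day / operating_hours_per_day    # ~1.1

print(f"{rides_per_car_week:.0f} rides/week/car, "
      f"{rides_per_car_day:.1f} rides/day/car, "
      f"{rides_per_car_hour:.2f} rides/hour/car")
```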