Tuesday, November 14, 2017

Techno-hype part 1

Don't, don't, don't, don't believe the hype!
Public Enemy

New technologies are routinely over-hyped because people underestimate the gap between a technology that works and a technology that is in everyday use by normal people.

You have probably figured out that I'm skeptical of the hype surrounding blockchain technology. Despite incident-free years spent routinely driving in company with Waymo's self-driving cars, I'm also skeptical of the self-driving car hype. Below the fold, an explanation.

Clearly, self-driving cars with a trained safety driver behind the wheel work fine in Bay Area traffic:
We've known for several years now that Waymo's (previously Google's) cars can handle most road conditions without a safety driver intervening. Last year, the company reported that its cars could go about 5,000 miles on California roads, on average, between human interventions.
[Chart: Crashes per 100M miles]
Waymo's cars are much safer than almost all human drivers:
Waymo has logged over two million miles on U.S. streets and has only had fault in one accident, making its cars by far the lowest at-fault rate of any driver class on the road — about 10 times lower than our safest demographic of human drivers (60–69 year-olds) and 40 times lower than new drivers, not to mention the obvious benefits gained from eliminating drunk drivers.

However, Waymo’s vehicles have a knack for getting hit by human drivers. When we look at total accidents (at fault and not), the Waymo accident rate is higher than the accident rate of most experienced drivers ... Most of these accidents are fender-benders caused by humans, with no fatalities or serious injuries. The leading theory is that Waymo’s vehicles adhere to the letter of traffic law, leading them to brake for things they are legally supposed to brake for (e.g., pedestrians approaching crosswalks). Since human drivers are not used to this lawful behavior, it leads to a higher rate of rear-end collisions (where the human driver is at-fault).
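As a rough check of the quoted figures, here is a minimal sketch on the same per-100M-mile scale as the chart. Only the two million miles, the single at-fault accident, and the 10x/40x ratios come from the quote above; the per-100M-mile rates are simply derived from them.

    # Back-of-the-envelope check of the at-fault rates implied by the quote above.
    WAYMO_MILES = 2_000_000      # total miles logged (from the quote)
    WAYMO_AT_FAULT = 1           # at-fault accidents (from the quote)

    waymo_rate = WAYMO_AT_FAULT / WAYMO_MILES * 100_000_000
    print(f"Waymo at-fault: {waymo_rate:.0f} per 100M miles")                  # 50

    # The quoted 10x and 40x ratios would then imply roughly:
    print(f"Safest humans (60-69): ~{waymo_rate * 10:.0f} per 100M miles")     # ~500
    print(f"New drivers: ~{waymo_rate * 40:.0f} per 100M miles")               # ~2000
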
Clearly, this is a technology that works. I would love it if my grandchildren never had to learn to drive, but even a decade from now I think they will still need to.

But, as Google realized some time ago, just being safer on average than most humans almost all the time is not enough for mass public deployment of self-driving cars. Back in June, John Markoff wrote:
Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight, to a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.

The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.

“We saw stuff that made us a little nervous,” Chris Urmson, a roboticist who was then head of the project, said at the time. He later mentioned in a blog post that the company had spotted a number of “silly” actions, including the driver turning around while the car was moving.

Johnny Luu, a spokesman for Google’s self-driving car effort, now called Waymo, disputed the accounts that went beyond what Mr. Urmson described, but said behavior like an employee’s rummaging in the back seat for his laptop while the car was moving and other “egregious” acts contributed to shutting down the experiment.
Gareth Corfield at The Register adds:
Google binned its self-driving cars' "take over now, human!" feature because test drivers kept dozing off behind the wheel instead of watching the road, according to reports.

"What we found was pretty scary," Google Waymo's boss John Krafcik told Reuters reporters during a recent media tour of a Waymo testing facility. "It's hard to take over because they have lost contextual awareness." ...

Since then, said Reuters, Google Waymo has focused on technology that does not require human intervention.
Timothy B. Lee at Ars Technica writes:
Waymo cars are designed to never have anyone touch the steering wheel or pedals. So the cars have a greatly simplified four-button user interface for passengers to use. There are buttons to call Waymo customer support, lock and unlock the car, pull over and stop the car, and start a ride.
But, during a recent show-and-tell with reporters, they weren't allowed to press the "pull over" button:
a Waymo spokesman tells Ars that the "pull over" button does work. However, the event had a tight schedule, and it would have slowed things down too much to let reporters push it.
Google was right to identify the "hand-off" problem as essentially insoluble, because the human driver would have lost "situational awareness".

Jean-Louis Gassée has an appropriately skeptical take on the technology, based on interviews with Chris Urmson, Google's Director of Self-Driving Cars from 2013 to late 2016 (he had joined the team in 2009). In an SXSW talk in early 2016, Urmson gave a sobering yet helpful vision of the project's future, summarized by Lee Gomes in an IEEE Spectrum article [as always, edits and emphasis mine]:

“Not only might it take much longer to arrive than the company has ever indicated — as long as 30 years, said Urmson — but the early commercial versions might well be limited to certain geographies and weather conditions. Self-driving cars are much easier to engineer for sunny weather and wide-open roads, and Urmson suggested the cars might be sold for those markets first.”
But the problem is actually much worse than either Google or Urmson says. Suppose, for the sake of argument, that self-driving cars three times as good as Waymo's are in wide use by normal people. A normal person would encounter a hand-off once in 15,000 miles of driving, or less than once a year. Driving would be something they'd be asked to do maybe 50 times in their life.
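
To make the arithmetic explicit, here is a minimal sketch. The 5,000 miles between interventions is Waymo's reported figure quoted earlier, and the threefold improvement is the hypothetical above; the annual mileage and years of driving are assumed typical values, not figures from this post.

    # Rough arithmetic behind the hand-off argument (a sketch, not data).
    MILES_BETWEEN_INTERVENTIONS = 5_000   # Waymo's reported figure, quoted earlier
    IMPROVEMENT = 3                       # the hypothetical "three times as good"
    ANNUAL_MILES = 13_000                 # assumed typical annual mileage
    DRIVING_YEARS = 60                    # assumed driving lifetime

    miles_per_handoff = MILES_BETWEEN_INTERVENTIONS * IMPROVEMENT  # 15,000 miles
    handoffs_per_year = ANNUAL_MILES / miles_per_handoff           # ~0.87 per year
    lifetime_handoffs = handoffs_per_year * DRIVING_YEARS          # ~52 hand-offs

    print(f"One hand-off every {miles_per_handoff:,} miles")
    print(f"About {handoffs_per_year:.2f} hand-offs per year")
    print(f"Roughly {lifetime_handoffs:.0f} hand-offs in a driving lifetime")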

Even if, when the hand-off happened, the human was not "climbing into the back seat, climbing out of an open car window, and even smooching" and had full "situational awareness", they would be faced with a situation too complex for the car's software. How likely is it that they would have the skills needed to cope, when the last time they did any driving was over a year ago, and on average they've driven only 25 times in their life? Current testing of self-driving cars hands off to drivers with more than a decade of driving experience and well over 100,000 miles behind the wheel. It bears no relationship to the hand-off problem in a mass deployment of self-driving technology.

Remember the crash of AF447?
the aircraft crashed after temporary inconsistencies between the airspeed measurements – likely due to the aircraft's pitot tubes being obstructed by ice crystals – caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall, from which it did not recover.
This was a hand-off to a crew that was highly trained, but had never before encountered a hand-off during cruise. What this means is that unrestricted mass deployment of self-driving cars requires Level 5 autonomy:
Level 5: Full Automation
• System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate.
• Driver involvement: Entering a destination.
Note that Waymo is just starting to work with Level 4 cars (the link is to a fascinating piece by Alexis C. Madrigal on Waymo's simulation and testing program). There are many other difficulties on the way to mass deployment, outlined by Timothy B. Lee at Ars Technica. Waymo is now testing these Level 4 cars in the benign environment of Phoenix, AZ:
Waymo, the autonomous car company from Google’s parent company Alphabet, has started testing a fleet of self-driving vehicles without any backup drivers on public roads, its chief executive officer said Tuesday. The tests, which will include passengers within the next few months, mark an important milestone that brings autonomous vehicle technology closer to operating without any human intervention.
But the real difficulty is this: the closer the technology gets to Level 5, the worse the hand-off problem gets, because the human has less experience. Incremental progress in deployments doesn't make this problem go away. Self-driving taxis in restricted urban areas, maybe in the next five years; a replacement for the family car, don't hold your breath. My grandchildren will still need to learn to drive.

1 comment:

David. said...

Cecilia Kang's Where Self-Driving Cars Go To Learn looks at the free-for-all testing environment in Arizona:

"Over the past two years, Arizona deliberately cultivated a rules-free environment for driverless cars, unlike dozens of other states that have enacted autonomous vehicle regulations over safety, taxes and insurance.

Arizona took its anything-goes approach while federal regulators delayed formulating an overarching set of self-driving car standards, leaving a gap for states. The federal government is only now poised to create its first law for autonomous vehicles; the law, which echoes Arizona’s stance, would let hundreds of thousands of them be deployed within a few years and would restrict states from putting up hurdles for the industry."

What could possibly go wrong?