Tuesday, November 14, 2017

Techno-hype part 1

Don't, don't, don't, don't believe the hype!
Public Enemy

New technologies are routinely over-hyped because people under-estimate the gap between a technology that works and a technology that is in everyday use by normal people.

You have probably figured out that I'm skeptical of the hype surrounding blockchain technology. Despite incident-free years spent routinely driving in company with Waymo's self-driving cars, I'm also skeptical of the self-driving car hype. Below the fold, an explanation.

Clearly, self-driving cars driven by a trained self-driving car driver in Bay Area traffic work fine:
We've known for several years now that Waymo's (previously Google's) cars can handle most road conditions without a safety driver intervening. Last year, the company reported that its cars could go about 5,000 miles on California roads, on average, between human interventions.
[Chart: Crashes per 100M miles]
Waymo's cars are much safer than almost all human drivers:
Waymo has logged over two million miles on U.S. streets and has only had fault in one accident, making its cars by far the lowest at-fault rate of any driver class on the road— about 10 times lower than our safest demographic of human drivers (60–69 year-olds) and 40 times lower than new drivers, not to mention the obvious benefits gained from eliminating drunk drivers.

However, Waymo’s vehicles have a knack for getting hit by human drivers. When we look at total accidents (at fault and not), the Waymo accident rate is higher than the accident rate of most experienced drivers ... Most of these accidents are fender-benders caused by humans, with no fatalities or serious injuries. The leading theory is that Waymo’s vehicles adhere to the letter of traffic law, leading them to brake for things they are legally supposed to brake for (e.g., pedestrians approaching crosswalks). Since human drivers are not used to this lawful behavior, it leads to a higher rate of rear-end collisions (where the human driver is at-fault).
Clearly, this is a technology that works. I would love it if my grand-children never had to learn to drive, but even a decade from now I think they will still need to.

But, as Google realized some time ago, just being safer on average than most humans almost all the time is not enough for mass public deployment of self-driving cars. Back in June, John Markoff wrote:
Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight, to a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.

The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.

“We saw stuff that made us a little nervous,” Chris Urmson, a roboticist who was then head of the project, said at the time. He later mentioned in a blog post that the company had spotted a number of “silly” actions, including the driver turning around while the car was moving.

Johnny Luu, a spokesman for Google’s self-driving car effort, now called Waymo, disputed the accounts that went beyond what Mr. Urmson described, but said behavior like an employee’s rummaging in the back seat for his laptop while the car was moving and other “egregious” acts contributed to shutting down the experiment.
Gareth Corfield at The Register adds:
Google binned its self-driving cars' "take over now, human!" feature because test drivers kept dozing off behind the wheel instead of watching the road, according to reports.

"What we found was pretty scary," Google Waymo's boss John Krafcik told Reuters reporters during a recent media tour of a Waymo testing facility. "It's hard to take over because they have lost contextual awareness." ...

Since then, said Reuters, Google Waymo has focused on technology that does not require human intervention.
Timothy B. Lee at Ars Technica writes:
Waymo cars are designed to never have anyone touch the steering wheel or pedals. So the cars have a greatly simplified four-button user interface for passengers to use. There are buttons to call Waymo customer support, lock and unlock the car, pull over and stop the car, and start a ride.
But, during a recent show-and-tell with reporters, they weren't allowed to press the "pull over" button:
a Waymo spokesman tells Ars that the "pull over" button does work. However, the event had a tight schedule, and it would have slowed things down too much to let reporters push it.
Google was right to identify the "hand-off" problem as essentially insoluble, because the human driver would have lost "situational awareness".

Jean-Louis Gassée has an appropriately skeptical take on the technology, based on interviews with Chris Urmson,
Google’s Director of Self-Driving Cars from 2013 to late 2016 (he had joined the team in 2009). In an SXSW talk in early 2016, Urmson gave a sobering yet helpful vision of the project’s future, summarized by Lee Gomes in an IEEE Spectrum article [as always, edits and emphasis mine]:

“Not only might it take much longer to arrive than the company has ever indicated — as long as 30 years, said Urmson — but the early commercial versions might well be limited to certain geographies and weather conditions. Self-driving cars are much easier to engineer for sunny weather and wide-open roads, and Urmson suggested the cars might be sold for those markets first.”
But the problem is actually much worse than either Google or Urmson say. Suppose, for the sake of argument, that self-driving cars three times as good as Waymo's are in wide use by normal people. A normal person would encounter a hand-off once in 15,000 miles of driving, or less than once a year. Driving would be something they'd be asked to do maybe 50 times in their life.

Even if, when the hand-off happened, the human was not "climbing into the back seat, climbing out of an open car window, and even smooching" and had full "situational awareness", they would be faced with a situation too complex for the car's software. How likely is it that they would have the skills needed to cope, when the last time they did any driving was over a year ago, and on average they've only driven 25 times in their life? Current testing of self-driving cars hands off to drivers with more than a decade of driving experience, well over 100,000 miles of it. It bears no relationship to the hand-off problem with a mass deployment of self-driving technology.
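The arithmetic behind these estimates is worth making explicit. This is a back-of-envelope sketch, not data: the 5,000-mile intervention interval is Waymo's reported figure quoted above, while the ~13,500 miles per year and ~55 driving years are my own assumptions for a typical US driver.

```python
# Back-of-envelope: how rare would hand-offs be for a normal driver?
MILES_BETWEEN_INTERVENTIONS = 5_000  # Waymo's reported average, quoted above
IMPROVEMENT_FACTOR = 3               # "three times as good as Waymo's"
ANNUAL_MILES = 13_500                # assumption: typical US driver
DRIVING_YEARS = 55                   # assumption: a driving lifetime

miles_per_handoff = MILES_BETWEEN_INTERVENTIONS * IMPROVEMENT_FACTOR
handoffs_per_year = ANNUAL_MILES / miles_per_handoff
lifetime_handoffs = handoffs_per_year * DRIVING_YEARS

print(f"One hand-off every {miles_per_handoff:,} miles")   # every 15,000 miles
print(f"{handoffs_per_year:.1f} hand-offs per year")       # 0.9, less than one
print(f"About {lifetime_handoffs:.0f} hand-offs per lifetime")  # about 50
```

Under these assumptions a hand-off is roughly a once-a-year event, and the roughly 50 lifetime hand-offs match the figure in the text.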

Remember the crash of AF447?
the aircraft crashed after temporary inconsistencies between the airspeed measurements – likely due to the aircraft's pitot tubes being obstructed by ice crystals – caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall, from which it did not recover.
This was a hand-off to a crew that was highly trained, but had never before encountered a hand-off during cruise. What this means is that unrestricted mass deployment of self-driving cars requires Level 5 autonomy:
Level 5: Full Automation

System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate.
Driver involvement: Entering a destination.
Note that Waymo is just starting to work with Level 4 cars (the link is to a fascinating piece by Alexis C. Madrigal on Waymo's simulation and testing program). There are many other difficulties on the way to mass deployment, outlined by Timothy B. Lee at Ars Technica. Waymo is, admittedly, testing Level 4 cars in the benign environment of Phoenix, AZ:
Waymo, the autonomous car company from Google’s parent company Alphabet, has started testing a fleet of self-driving vehicles without any backup drivers on public roads, its chief executive officer said Tuesday. The tests, which will include passengers within the next few months, mark an important milestone that brings autonomous vehicle technology closer to operating without any human intervention.
But the real difficulty is this. The closer the technology gets to Level 5, the worse the hand-off problem gets, because the human has less experience. Incremental progress in deployments doesn't make this problem go away. Self-driving taxis in restricted urban areas maybe in the next five years; a replacement for the family car, don't hold your breath. My grand-children will still need to learn to drive.


David. said...

Cecilia Kang's Where Self-Driving Cars Go To Learn looks at the free-for-all testing environment in Arizona:

"Over the past two years, Arizona deliberately cultivated a rules-free environment for driverless cars, unlike dozens of other states that have enacted autonomous vehicle regulations over safety, taxes and insurance.

Arizona took its anything-goes approach while federal regulators delayed formulating an overarching set of self-driving car standards, leaving a gap for states. The federal government is only now poised to create its first law for autonomous vehicles; the law, which echoes Arizona’s stance, would let hundreds of thousands of them be deployed within a few years and would restrict states from putting up hurdles for the industry."

What could possibly go wrong?

Mike K said...

It seems to me that there's a "good enough" solution for mass deployment before Level 5 is in production, provided that the "pull over" button works and that, in all situations where you invoke concern about a human-driver takeover, the AI can reliably default to avoiding hitting anything while it decelerates. That is, if the AI realizes it doesn't know how to handle the situation normally, it accepts defeat and comes to a stop. (That seems to be the norm during current testing, based on my read of Madrigal's Waymo article.) If that's the case, humans don't suddenly have to take over a moving vehicle that's already in a boundary situation. Instead, having stopped, the AI can then reassess (if the confounding factors have changed) or the human can slowly drive out of proximity. Or perhaps such situations become akin to what a flat tire is now: some people are capable of recovering on their own, others wait for roadside assistance.

Coming to a stop on, or even alongside, a highway is far from ideal, I concede, and will lead to more rear-enders as long as humans still drive some percentage of vehicles. But rear-end accidents are far less likely to cause fatalities than other types (citation needed), so that seems like an acceptable trade-off during a transitional period.

All that said, I'm cautiously pessimistic about self-driving cars in our lifetimes. I'm more worried about bugs, outages, and hacking preventing widespread implementation.

David. said...

"how much preparation have federal transportation authorities carried out to meet the challenge of the advent of self-driving cars and trucks? Not nearly enough, according to a new 44-page report by the Government Accountability Office, a Congressional watchdog agency." reports Paul Feldman. And:

"the U.S. House of Representatives has approved a bill allowing self-driving vehicles to operate on public roadways with minimal government supervision. Similar legislation has been OK’d by a Senate committee, but is currently stalled by a handful of senators concerned about safety provisions."

David. said...

In increasing order of skepticism, we have first A Decade after DARPA: Our View on the State of the Art in Self-Driving Cars by Bryan Salesky, CEO, Argo AI (Ford's self-driving effort):

"Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology."

Second, After Peak Hype, Self-Driving Cars Enter the Trough of Disillusionment by Aarian Marshall at Wired using Gartner’s “hype cycle” methodology:

"Volvo’s retreat is just the latest example of a company cooling on optimistic self-driving car predictions. In 2012, Google CEO Sergey Brin said even normies would have access to autonomous vehicles in fewer than five years—nope. Those who shelled out an extra $3,000 for Tesla’s Enhanced Autopilot are no doubt disappointed by its non-appearance, nearly six months after its due date. New Ford CEO Jim Hackett recently moderated expectations for the automaker’s self-driving service, which his predecessor said in 2016 would be deployed at scale by 2021. “We are going to be in the market with products in that time frame,” he told the San Francisco Chronicle. “But the nature of the romanticism by everybody in the media about how this robot works is overextended right now.”"

And third Wired: Self Driving Car Hype Crashes Into Harsh Realities by Yves Smith at naked capitalism, which is the only piece to bring up the hand-off problem:

"The fudge is to have a human at the ready to take over the car in case it asks for help.

First, as one might infer, the human who is suddenly asked to intervene is going to have to quickly assess the situation. The handoff delay means a slower response than if a human had been driving the entire time. Second, and even worse, the human suddenly asked to take control might not even see what the emergency need is. Third, the car itself might not recognize that it is about to get into trouble."

All three pieces are worth reading.

David. said...

More skepticism from Christian Wolmar:

“This is a fantasy that has not been thought through, and is being promoted by technology and auto manufacturers because tech companies have vast amounts of footloose capital they don’t know what to do with, and auto manufacturers are terrified they’re not on board with the new big thing,” he said. “So billions are being spent developing technology that nobody has asked for, that will not be practical, and that will have many damaging effects.”

He has an entire book on the topic.

David. said...

Tim Bradshaw reports:

"Autonomous vehicles are in danger of being turned into “weapons”, leading governments around the world to block cars operated by foreign companies, the head of Baidu’s self-driving car programme has warned.

Qi Lu, chief operating officer at the Chinese internet group, said security concerns could become a problem for global carmakers and technology companies, including the US and China.

“It has nothing to do with any particular government — it has to do with the very nature of autonomy,” he said on the sidelines of the Consumer Electronics Show last week. “You have an object that is capable of moving by itself. By definition, it is a weapon.”

Charlie Stross figured this out ten years ago.

David. said...

"“We will have autonomous cars on the road, I believe within the next 18 months,” [Uber CEO Khosrowshahi] said. ... for example, Phoenix, there will be 95% of cases where the company may not have everything mapped perfectly, or the weather might not be perfect, or there could be other factors that will mean Uber will opt to send a driver. “But in 5 percent of cases, we’ll send an autonomous car,” Khosrowshahi said, when everything’s just right, and still the user will be able to choose whether they get an AV or a regular car," reports Darrell Etherington at TechCrunch. Given that Uber loses $5B/yr and Khosrowshahi has 25 months to IPO it, you should treat everything he says as pre-IPO hype.

David. said...

Uber and Lyft want you banned from using your own self-driving car in urban areas is the title of a piece by Ethan Baron at siliconbeat. The geometric impossibility of replacing mass transit with fleets of autonomous cars is starting to sink in.

David. said...

Ross Marchand at Real Clear Policy looks into Waymo's reported numbers:

"The company’s headline figures since 2015 are certainly encouraging, with “all reported disengagements” dropping from .80 per thousand miles (PTM) driven to .18 PTM. Broken down by category, however, this four-fold decrease in disengagements appears very uneven. While the rate of technology failures has fallen by more than 90 percent (from .64 to .06), unsafe driving rates decreased only by 25 percent (from .16 to .12). ... But the ability of cars to analyze situations on the road and respond has barely shown improvement since the beginning of 2016. In key categories, like “incorrect behavior prediction” and “unwanted maneuver of the vehicle,” Waymo vehicles actually did worse in 2017 than in 2016."
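The quoted per-thousand-mile (PTM) rates make Marchand's point about uneven progress easy to verify; the figures below are taken directly from the quote, and the percentage drops are simple arithmetic:

```python
# Waymo disengagement rates per thousand miles (PTM), from the quote above:
# (rate in 2015, rate in 2017) for each reported category.
rates = {
    "all reported disengagements": (0.80, 0.18),
    "technology failures":         (0.64, 0.06),
    "unsafe driving":              (0.16, 0.12),
}

for category, (before, after) in rates.items():
    drop_pct = (before - after) / before * 100
    print(f"{category}: {before:.2f} -> {after:.2f} PTM "
          f"({drop_pct:.0f}% decrease)")
```

The headline rate fell more than four-fold (0.80/0.18 ≈ 4.4×), but almost all of that came from the >90% drop in technology failures; the unsafe-driving rate fell only 25%, which is the uneven picture Marchand describes.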