Tuesday, November 14, 2017

Techno-hype part 1

Don't, don't, don't, don't believe the hype!
Public Enemy

New technologies are routinely over-hyped because people underestimate the gap between a technology that works and a technology that is in everyday use by normal people.

You have probably figured out that I'm skeptical of the hype surrounding blockchain technology. Despite years of routinely and uneventfully sharing the road with Waymo's self-driving cars, I'm also skeptical of the self-driving car hype. Below the fold, an explanation.

Clearly, self-driving cars supervised by a trained safety driver work fine in Bay Area traffic:
We've known for several years now that Waymo's (previously Google's) cars can handle most road conditions without a safety driver intervening. Last year, the company reported that its cars could go about 5,000 miles on California roads, on average, between human interventions.
[Chart: Crashes per 100M miles]
Waymo's cars are much safer than almost all human drivers:
Waymo has logged over two million miles on U.S. streets and has only had fault in one accident, making its cars by far the lowest at-fault rate of any driver class on the road— about 10 times lower than our safest demographic of human drivers (60–69 year-olds) and 40 times lower than new drivers, not to mention the obvious benefits gained from eliminating drunk drivers.

However, Waymo’s vehicles have a knack for getting hit by human drivers. When we look at total accidents (at fault and not), the Waymo accident rate is higher than the accident rate of most experienced drivers ... Most of these accidents are fender-benders caused by humans, with no fatalities or serious injuries. The leading theory is that Waymo’s vehicles adhere to the letter of traffic law, leading them to brake for things they are legally supposed to brake for (e.g., pedestrians approaching crosswalks). Since human drivers are not used to this lawful behavior, it leads to a higher rate of rear-end collisions (where the human driver is at-fault).
Clearly, this is a technology that works. I would love it if my grandchildren never had to learn to drive, but even a decade from now I think they will still need to.

But, as Google realized some time ago, just being safer on average than most humans almost all the time is not enough for mass public deployment of self-driving cars. Back in June, John Markoff wrote:
Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight, to a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.

The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.

“We saw stuff that made us a little nervous,” Chris Urmson, a roboticist who was then head of the project, said at the time. He later mentioned in a blog post that the company had spotted a number of “silly” actions, including the driver turning around while the car was moving.

Johnny Luu, a spokesman for Google’s self-driving car effort, now called Waymo, disputed the accounts that went beyond what Mr. Urmson described, but said behavior like an employee’s rummaging in the back seat for his laptop while the car was moving and other “egregious” acts contributed to shutting down the experiment.
Gareth Corfield at The Register adds:
Google binned its self-driving cars' "take over now, human!" feature because test drivers kept dozing off behind the wheel instead of watching the road, according to reports.

"What we found was pretty scary," Google Waymo's boss John Krafcik told Reuters reporters during a recent media tour of a Waymo testing facility. "It's hard to take over because they have lost contextual awareness." ...

Since then, said Reuters, Google Waymo has focused on technology that does not require human intervention.
Timothy B. Lee at Ars Technica writes:
Waymo cars are designed to never have anyone touch the steering wheel or pedals. So the cars have a greatly simplified four-button user interface for passengers to use. There are buttons to call Waymo customer support, lock and unlock the car, pull over and stop the car, and start a ride.
But, during a recent show-and-tell with reporters, they weren't allowed to press the "pull over" button:
a Waymo spokesman tells Ars that the "pull over" button does work. However, the event had a tight schedule, and it would have slowed things down too much to let reporters push it.
Google was right to identify the "hand-off" problem as essentially insoluble, because the human driver would have lost "situational awareness".

Jean-Louis Gassée has an appropriately skeptical take on the technology, based on interviews with Chris Urmson:
Google’s Director of Self-Driving Cars from 2013 to late 2016 (he had joined the team in 2009). In a SXSW talk in early 2016, Urmson gives a sobering yet helpful vision of the project’s future, summarized by Lee Gomes in an IEEE Spectrum article [as always, edits and emphasis mine]:

“Not only might it take much longer to arrive than the company has ever indicated — as long as 30 years, said Urmson — but the early commercial versions might well be limited to certain geographies and weather conditions. Self-driving cars are much easier to engineer for sunny weather and wide-open roads, and Urmson suggested the cars might be sold for those markets first.”
But the problem is actually much worse than either Google or Urmson says. Suppose, for the sake of argument, that self-driving cars three times as good as Waymo's are in wide use by normal people. A normal person would encounter a hand-off once in 15,000 miles of driving, or less than once a year. Driving would be something they'd be asked to do maybe 50 times in their life.

Even if, when the hand-off happened, the human was not "climbing into the back seat, climbing out of an open car window, and even smooching" and had full "situational awareness", they would be faced with a situation too complex for the car's software. How likely is it that they would have the skills needed to cope, when the last time they did any driving was over a year ago, and on average they've only driven 25 times in their life? Current testing of self-driving cars hands off to drivers with more than a decade of driving experience and well over 100,000 miles behind the wheel. It bears no relationship to the hand-off problem with a mass deployment of self-driving technology.
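To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 15,000-mile interval follows from the 5,000-mile Waymo figure quoted above; the annual-mileage and driving-lifetime numbers are my own illustrative assumptions, not data from the sources cited:

```python
# Back-of-the-envelope estimate of how often a typical driver would face
# a hand-off from a hypothetical car three times as good as Waymo's
# reported ~5,000 miles between interventions. The annual mileage and
# driving lifetime below are illustrative assumptions, not source data.

MILES_BETWEEN_HANDOFFS = 3 * 5_000   # 15,000 miles per hand-off
ANNUAL_MILES = 13_500                # assumed: rough US average per driver
DRIVING_YEARS = 60                   # assumed driving lifetime

handoffs_per_year = ANNUAL_MILES / MILES_BETWEEN_HANDOFFS
lifetime_handoffs = handoffs_per_year * DRIVING_YEARS

print(f"hand-offs per year:     {handoffs_per_year:.2f}")  # ~0.9, i.e. less than once a year
print(f"hand-offs per lifetime: {lifetime_handoffs:.0f}")  # ~54, i.e. "maybe 50 times"
```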

Remember the crash of AF447?
the aircraft crashed after temporary inconsistencies between the airspeed measurements – likely due to the aircraft's pitot tubes being obstructed by ice crystals – caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall, from which it did not recover.
This was a hand-off to a crew that was highly trained, but had never before encountered a hand-off during cruise. What this means is that unrestricted mass deployment of self-driving cars requires Level 5 autonomy:
Level 5: Full Automation

System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate.
Driver involvement: Entering a destination.
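To make the distinction the argument below relies on concrete, here is a minimal sketch of how Level 4 differs from Level 5. This is my own simplified gloss, not SAE's official J3016 wording: a Level 4 car is only fully autonomous inside a restricted operating domain (certain mapped areas, benign weather), while a Level 5 car has no such restriction:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomationLevel:
    """Simplified illustration of the SAE distinction, not the official definitions."""
    level: int
    name: str
    # None means no restriction: the car handles anything a human driver could.
    operating_domain: Optional[str]
    driver_involvement: str

LEVEL_4 = AutomationLevel(
    level=4,
    name="High Automation",
    operating_domain="geofenced, well-mapped areas in benign conditions (e.g. Phoenix, AZ)",
    driver_involvement="none within the domain; the car must handle or safely stop on its own",
)

LEVEL_5 = AutomationLevel(
    level=5,
    name="Full Automation",
    operating_domain=None,  # any road, any conditions a human driver could negotiate
    driver_involvement="entering a destination",
)
```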
Note that Waymo is just starting to work with Level 4 cars (the link is to a fascinating piece by Alexis C. Madrigal on Waymo's simulation and testing program). There are many other difficulties on the way to mass deployment, outlined by Timothy B. Lee at Ars Technica. Waymo is, however, already testing Level 4 cars in the benign environment of Phoenix, AZ:
Waymo, the autonomous car company from Google’s parent company Alphabet, has started testing a fleet of self-driving vehicles without any backup drivers on public roads, its chief executive officer said Tuesday. The tests, which will include passengers within the next few months, mark an important milestone that brings autonomous vehicle technology closer to operating without any human intervention.
But the real difficulty is this: the closer the technology gets to Level 5, the worse the hand-off problem gets, because the human has less experience. Incremental progress in deployments doesn't make this problem go away. Self-driving taxis in restricted urban areas, maybe in the next five years; a replacement for the family car, don't hold your breath. My grandchildren will still need to learn to drive.

21 comments:

David. said...

Cecilia Kang's Where Self-Driving Cars Go To Learn looks at the free-for-all testing environment in Arizona:

"Over the past two years, Arizona deliberately cultivated a rules-free environment for driverless cars, unlike dozens of other states that have enacted autonomous vehicle regulations over safety, taxes and insurance.

Arizona took its anything-goes approach while federal regulators delayed formulating an overarching set of self-driving car standards, leaving a gap for states. The federal government is only now poised to create its first law for autonomous vehicles; the law, which echoes Arizona’s stance, would let hundreds of thousands of them be deployed within a few years and would restrict states from putting up hurdles for the industry."

What could possibly go wrong?

Mike K said...

It seems to me that there's a "good enough" solution for mass deployment before Level 5 is in production, provided that the "pull over button" works and that in all situations where you invoke concern about a human-driver takeover, the AI can reliably default to avoiding hitting anything while it decelerates. That is, if the AI realizes it doesn't know how to handle the situation normally, it accepts defeat and comes to a stop. (That seems to be the norm during current testing, based on my read of Madrigal's Waymo article.) If that's the case, humans don't suddenly have to take over a moving vehicle that's already in a boundary situation. Instead, having stopped, the AI can then reassess (if the confounding factors have changed) or the human can slowly drive out of proximity. Or perhaps such situations become akin to a flat tire now: some people are capable of recovering on their own, others wait for roadside assistance.

Coming to a stop on, or even alongside, a highway is far from ideal, I concede, and will lead to more rear-enders as long as humans still drive some percentage of vehicles. But rear-end accidents are far less likely to cause fatalities than other types (citation needed), so that seems like an acceptable trade-off during a transitional period.

All that said, I'm cautiously pessimistic about self-driving cars in our lifetimes. I'm more worried about bugs, outages, and hacking preventing widespread implementation.
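Mike K's "accept defeat and stop" fallback might look something like the minimal sketch below. This is purely hypothetical; the threshold and function names are invented for illustration and are not Waymo's or Cruise's actual logic:

```python
# Hypothetical sketch of a minimal-risk fallback: instead of handing
# control to the passenger, the car stops when it loses confidence.
# The threshold and names are invented for illustration only.

CONFIDENCE_THRESHOLD = 0.95

def control_step(planner_confidence: float, speed_mph: float) -> str:
    """Decide the vehicle's action for this control cycle."""
    if planner_confidence >= CONFIDENCE_THRESHOLD:
        return "continue driving normally"
    if speed_mph > 0.0:
        # Boundary situation: decelerate while avoiding obstacles,
        # rather than asking the passenger to take over at speed.
        return "pull over and brake to a stop"
    # Already stopped: reassess once conditions change, or wait for
    # remote/roadside assistance, much like a flat tire today.
    return "reassess or request assistance"
```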

David. said...

"how much preparation have federal transportation authorities carried out to meet the challenge of the advent of self-driving cars and trucks? Not nearly enough, according to a new 44-page report by the Government Accountability Office, a Congressional watchdog agency." reports Paul Feldman. And:

"the U.S. House of Representatives has approved a bill allowing self-driving vehicles to operate on public roadways with minimal government supervision. Similar legislation has been OK’d by a Senate committee, but is currently stalled by a handful of senators concerned about safety provisions."

David. said...

In increasing order of skepticism, we have, first, A Decade after DARPA: Our View on the State of the Art in Self-Driving Cars by Bryan Salesky, CEO, Argo AI (Ford's self-driving effort):

"Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology."

Second, After Peak Hype, Self-Driving Cars Enter the Trough of Disillusionment by Aarian Marshall at Wired using Gartner’s “hype cycle” methodology:

"Volvo’s retreat is just the latest example of a company cooling on optimistic self-driving car predictions. In 2012, Google CEO Sergey Brin said even normies would have access to autonomous vehicles in fewer than five years—nope. Those who shelled out an extra $3,000 for Tesla’s Enhanced Autopilot are no doubt disappointed by its non-appearance, nearly six months after its due date. New Ford CEO Jim Hackett recently moderated expectations for the automaker’s self-driving service, which his predecessor said in 2016 would be deployed at scale by 2021. “We are going to be in the market with products in that time frame,” he told the San Francisco Chronicle. “But the nature of the romanticism by everybody in the media about how this robot works is overextended right now.”"

And third, Wired: Self Driving Car Hype Crashes Into Harsh Realities by Yves Smith at naked capitalism, which is the only piece to bring up the hand-off problem:

"The fudge is to have a human at ready to take over the car in case it asks for help.

First, as one might infer, the human who is suddenly asked to intervene is going to have to quickly assess the situation. The handoff delay means a slower response than if a human had been driving the entire time. Second, and even worse, the human suddenly asked to take control might not even see what the emergency need is. Third, the car itself might not recognize that it is about to get into trouble."

All three pieces are worth reading.

David. said...

More skepticism from Christian Wolmar:

“This is a fantasy that has not been thought through, and is being promoted by technology and auto manufacturers because tech companies have vast amounts of footloose capital they don’t know what to do with, and auto manufacturers are terrified they’re not on board with the new big thing,” he said. “So billions are being spent developing technology that nobody has asked for, that will not be practical, and that will have many damaging effects.”

He has an entire book on the topic.

David. said...

Tim Bradshaw reports:

"Autonomous vehicles are in danger of being turned into “weapons”, leading governments around the world to block cars operated by foreign companies, the head of Baidu’s self-driving car programme has warned.

Qi Lu, chief operating officer at the Chinese internet group, said security concerns could become a problem for global carmakers and technology companies, including the US and China.

“It has nothing to do with any particular government — it has to do with the very nature of autonomy,” he said on the sidelines of the Consumer Electronics Show last week. “You have an object that is capable of moving by itself. By definition, it is a weapon.”

Charlie Stross figured this out ten years ago.

David. said...

“We will have autonomous cars on the road, I believe within the next 18 months,” [Uber CEO Khosrowshahi] said. ... for example, Phoenix, there will be 95% of cases where the company may not have everything mapped perfectly, or the weather might not be perfect, or there could be other factors that will mean Uber will opt to send a driver. “But in 5 percent of cases, we’ll send an autonomous car,” Khosrowshahi said, when everything’s just right, and still the user will be able to choose whether they get an AV or a regular car." reports Darrell Etherington at TechCrunch. Given that Uber loses $5B/yr and Khosrowshahi has 25 months to IPO it, you should treat everything he says as pre-IPO hype.

David. said...

Uber and Lyft want you banned from using your own self-driving car in urban areas is the title of a piece by Ethan Baron at siliconbeat. The geometric impossibility of replacing mass transit with fleets of autonomous cars is starting to sink in.

David. said...

Ross Marchand at Real Clear Policy looks into Waymo's reported numbers:

"The company’s headline figures since 2015 are certainly encouraging, with “all reported disengagements” dropping from .80 per thousand miles (PTM) driven to .18 PTM. Broken down by category, however, this four-fold decrease in disengagements appears very uneven. While the rate of technology failures has fallen by more than 90 percent (from .64 to .06), unsafe driving rates decreased only by 25 percent (from .16 to .12). ... But the ability of cars to analyze situations on the road and respond has barely shown improvement since the beginning of 2016. In key categories, like “incorrect behavior prediction” and “unwanted maneuver of the vehicle,” Waymo vehicles actually did worse in 2017 than in 2016."

David. said...

And also The most cutting-edge cars on the planet require an old-fashioned handwashing:

"For example, soap residue or water spots could effectively "blind" an autonomous car. A traditional car wash's heavy brushes could jar the vehicle's sensors, disrupting their calibration and accuracy. Even worse, sensors, which can cost over $100,000, could be broken.

A self-driving vehicle's exterior needs to be cleaned even more frequently than a typical car because the sensors must remain free of obstructions. Dirt, dead bugs, bird droppings or water spots can impact the vehicle's ability to drive safely."

David. said...

"[California]’s Department of Motor Vehicles said Monday that it was eliminating a requirement for autonomous vehicles to have a person in the driver’s seat to take over in the event of an emergency. ... The new rules also require companies to be able to operate the vehicle remotely ... and communicate with law enforcement and other drivers when something goes wrong." reports Daisuke Wakabayashi at the NYT. Note that these are not level 5 autonomous cars, they are remote-controlled.

David. said...

"Cruise vehicles "can't easily handle two-way residential streets that only have room for one car to pass at a time. That's because Cruise cars treat the street as one lane and always prefer to be in the center of a lane, and oncoming traffic causes the cars to stop."

Other situations that give Cruise vehicles trouble:

- Distinguishing between motorcycles and bicycles
- Entering tunnels, which can interfere with the cars' GPS sensors
- U-turns
- Construction zones"

From Timothy B. Lee's New report highlights limitations of Cruise self-driving cars. It is true that GM's Cruise is trying to self-drive in San Francisco, which isn't an easy place for humans. But they are clearly a long way from Waymo's level, even allowing for the easier driving in Silicon Valley and Phoenix.

David. said...

"While major technology and car companies are teaching cars to drive themselves, Phantom Auto is working on remote control systems, often referred to as teleoperation, that many see as a necessary safety feature for the autonomous cars of the future. And that future is closer than you might think: California will allow companies to test autonomous vehicles without a safety driver — as long as the car can be operated remotely — starting next month." from John R. Quain's When Self-Driving Cars Can’t Help Themselves, Who Takes the Wheel?.

So the car is going to call Tech Support and be told "All our operators are busy driving other cars. Your call is important to us, please don't hang up."

David. said...

"Police in Tempe, Arizona, have released dash cam footage showing the final seconds before an Uber self-driving vehicle crashed into 49-year-old pedestrian Elaine Herzberg. She died at the hospital shortly afterward. ... Tempe police also released internal dash cam footage showing the car's driver, Rafaela Vasquez, in the seconds before the crash. Vasquez can be seen looking down toward her lap for almost five seconds before glancing up again. Almost immediately after looking up, she gets a look of horror on her face as she realizes the car is about to hit Herzberg." writes Timothy B. Lee at Ars Technica.

In this case the car didn't hand off to the human, but even if it had the result would likely have been the same.

David. said...

Timothy B. Lee at Ars Technica has analyzed the video and writes Video suggests huge problems with Uber’s driverless car program:

"The video shows that Herzberg crossed several lanes of traffic before reaching the lane where the Uber car was driving. You can debate whether a human driver should have been able to stop in time. But what's clear is that the vehicle's lidar and radar sensors—which don't depend on ambient light and had an unobstructed view—should have spotted her in time to stop.

On top of that, the video shows that Uber's "safety driver" was looking down at her lap for nearly five seconds just before the crash. This suggests that Uber was not doing a good job of supervising its safety drivers to make sure they actually do their jobs."

David. said...

"In a blogpost, Tesla said the driver of the sport-utility Model X that crashed in Mountain View, 38-year-old Apple software engineer Wei Huang, “had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision." reports The Guardian. The car tried to hand off to the driver but he didn't respond.

David. said...

“Technology does not eliminate error, but it changes the nature of errors that are made, and it introduces new kinds of errors,” said Chesley Sullenberger, the former US Airways pilot who landed a plane in the Hudson River in 2009 after its engines were struck by birds and who now sits on a Department of Transportation advisory committee on automation. “We have to realize that it’s not a panacea.” from the New York Times editorial The Bright, Shiny Distraction of Self-Driving Cars.

David. said...

In The way we regulate self-driving cars is broken—here’s how to fix it Timothy B. Lee sets out a very pragmatic approach to regulation of self-driving cars. Contrast this with the current rush to exempt them from regulations! For example:

"Anyone can buy a conventional car and perform safety tests on it. Academic researchers, government regulators, and other independent experts can take a car apart, measure its emissions, probe it for computer security flaws, and subject it to crash tests. This means that if a car has problems that aren't caught (or are even covered up) by the manufacturer, they're likely to be exposed by someone else.

But this kind of independent analysis won't be an option when Waymo introduces its driverless car service later this year. Waymo's cars won't be for sale at any price, and the company likely won't let customers so much as open the hood. This means that the public will be mostly dependent on Waymo itself to provide information about how its cars work."

David. said...

In People must retain control of autonomous vehicles Ashley Nunes, Bryan Reimer and Joseph F. Coughlin sound a warning against Level 5 self-driving vehicles, and raise strong cautions against rushed deployment of lower levels, in two areas:

Liability:

"Like other producers, developers of autonomous vehicles are legally liable for damages that stem from the defective design, manufacture and marketing of their products. The potential liability risk is great for driverless cars because complex systems interact in ways that are unexpected."

Safety:

"Driverless cars should be treated much like aircraft, in which the involvement of people is required despite such systems being highly automated. Current testing of autonomous vehicles abides by this principle. Safety drivers are present, even though developers and regulators talk of full automation."

David. said...

Alex Roy's The Half-Life Of Danger: The Truth Behind The Tesla Model X Crash is a must-read deep dive into the details of the argument in this post, with specifics about Tesla's "Autopilot" and Cadillac's "SuperCruise":

"As I stated a year ago, the more such systems substitute for human input, the more human skills erode, and the more frequently a 'failure' and/or crash is attributed to the technology rather than human ignorance of it. Combine the toxic marriage of human ignorance and skill degradation with an increasing number of such systems on the road, and the number of crashes caused by this interplay is likely to remain constant—or even rise—even if their crash rate declines."

David. said...

A collection of posts about Stanford's autonomous car research is here. See, in particular, Holly Russell's research on the hand-off problem.