Tuesday, November 14, 2017

Techno-hype part 1

Don't, don't, don't, don't believe the hype!
Public Enemy

New technologies are routinely over-hyped because people under-estimate the gap between a technology that works and a technology that is in everyday use by normal people.

You have probably figured out that I'm skeptical of the hype surrounding blockchain technology. Despite incident-free years spent routinely driving in company with Waymo's self-driving cars, I'm also skeptical of the self-driving car hype. Below the fold, an explanation.

Clearly, self-driving cars driven by a trained self-driving car driver in Bay Area traffic work fine:
We've known for several years now that Waymo's (previously Google's) cars can handle most road conditions without a safety driver intervening. Last year, the company reported that its cars could go about 5,000 miles on California roads, on average, between human interventions.
[Chart: Crashes per 100M miles, by driver class]
Waymo's cars are much safer than almost all human drivers:
Waymo has logged over two million miles on U.S. streets and has only had fault in one accident, making its cars by far the lowest at-fault rate of any driver class on the road— about 10 times lower than our safest demographic of human drivers (60–69 year-olds) and 40 times lower than new drivers, not to mention the obvious benefits gained from eliminating drunk drivers.

However, Waymo’s vehicles have a knack for getting hit by human drivers. When we look at total accidents (at fault and not), the Waymo accident rate is higher than the accident rate of most experienced drivers ... Most of these accidents are fender-benders caused by humans, with no fatalities or serious injuries. The leading theory is that Waymo’s vehicles adhere to the letter of traffic law, leading them to brake for things they are legally supposed to brake for (e.g., pedestrians approaching crosswalks). Since human drivers are not used to this lawful behavior, it leads to a higher rate of rear-end collisions (where the human driver is at-fault).
Clearly, this is a technology that works. I would love it if my grand-children never had to learn to drive, but even a decade from now I think they will still need to.

But, as Google realized some time ago, just being safer on average than most humans almost all the time is not enough for mass public deployment of self-driving cars. Back in June, John Markoff wrote:
Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight, to a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.

The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.

“We saw stuff that made us a little nervous,” Chris Urmson, a roboticist who was then head of the project, said at the time. He later mentioned in a blog post that the company had spotted a number of “silly” actions, including the driver turning around while the car was moving.

Johnny Luu, a spokesman for Google’s self-driving car effort, now called Waymo, disputed the accounts that went beyond what Mr. Urmson described, but said behavior like an employee’s rummaging in the back seat for his laptop while the car was moving and other “egregious” acts contributed to shutting down the experiment.
Gareth Corfield at The Register adds:
Google binned its self-driving cars' "take over now, human!" feature because test drivers kept dozing off behind the wheel instead of watching the road, according to reports.

"What we found was pretty scary," Google Waymo's boss John Krafcik told Reuters reporters during a recent media tour of a Waymo testing facility. "It's hard to take over because they have lost contextual awareness." ...

Since then, said Reuters, Google Waymo has focused on technology that does not require human intervention.
Timothy B. Lee at Ars Technica writes:
Waymo cars are designed to never have anyone touch the steering wheel or pedals. So the cars have a greatly simplified four-button user interface for passengers to use. There are buttons to call Waymo customer support, lock and unlock the car, pull over and stop the car, and start a ride.
But, during a recent show-and-tell with reporters, they weren't allowed to press the "pull over" button:
a Waymo spokesman tells Ars that the "pull over" button does work. However, the event had a tight schedule, and it would have slowed things down too much to let reporters push it.
Google was right to identify the "hand-off" problem as essentially insoluble, because the human driver would have lost "situational awareness".

Jean-Louis Gassée has an appropriately skeptical take on the technology, based on interviews with Chris Urmson:
Google’s Director of Self-Driving Cars from 2013 to late 2016 (he had joined the team in 2009). In an SXSW talk in early 2016, Urmson gives a sobering yet helpful vision of the project’s future, summarized by Lee Gomes in an IEEE Spectrum article [as always, edits and emphasis mine]:

“Not only might it take much longer to arrive than the company has ever indicated — as long as 30 years, said Urmson — but the early commercial versions might well be limited to certain geographies and weather conditions. Self-driving cars are much easier to engineer for sunny weather and wide-open roads, and Urmson suggested the cars might be sold for those markets first.”
But the problem is actually much worse than either Google or Urmson say. Suppose, for the sake of argument, that self-driving cars three times as good as Waymo's are in wide use by normal people. A normal person would encounter a hand-off once in 15,000 miles of driving, or less than once a year. Driving would be something they'd be asked to do maybe 50 times in their life.

Even if, when the hand-off happened, the human was not "climbing into the back seat, climbing out of an open car window, and even smooching" and had full "situational awareness", they would be faced with a situation too complex for the car's software. How likely is it that they would have the skills needed to cope, when the last time they did any driving was over a year ago, and on average they've only driven 25 times in their life? Current testing of self-driving cars hands off to drivers with more than a decade of driving experience and well over 100,000 miles of it. It bears no relationship to the hand-off problem with a mass deployment of self-driving technology.
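
To make the numbers concrete, here is a back-of-the-envelope sketch in Python. The 5,000 miles between interventions is Waymo's reported figure quoted above; the average annual mileage (about 12,000 miles) and a 60-year driving lifetime are my assumptions, not numbers from any of the sources:

```python
# Rough check of the hand-off frequency argument.
# Assumed (not from the sources): ~12,000 miles driven per year,
# ~60 years of driving in a lifetime.

WAYMO_MILES_PER_INTERVENTION = 5_000   # Waymo's reported average, ~2016
IMPROVEMENT_FACTOR = 3                 # "three times as good as Waymo's"
ANNUAL_MILES = 12_000                  # assumed average annual mileage
DRIVING_YEARS = 60                     # assumed driving lifetime

miles_per_handoff = WAYMO_MILES_PER_INTERVENTION * IMPROVEMENT_FACTOR
handoffs_per_year = ANNUAL_MILES / miles_per_handoff
lifetime_handoffs = handoffs_per_year * DRIVING_YEARS

print(f"Miles between hand-offs: {miles_per_handoff:,}")    # 15,000
print(f"Hand-offs per year:      {handoffs_per_year:.1f}")  # 0.8
print(f"Hand-offs in a lifetime: {lifetime_handoffs:.0f}")  # 48
```

On those assumptions a hand-off happens less than once a year and roughly 50 times in a driving lifetime, which is where the figures above come from.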

Remember the crash of AF447?
the aircraft crashed after temporary inconsistencies between the airspeed measurements – likely due to the aircraft's pitot tubes being obstructed by ice crystals – caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall, from which it did not recover.
This was a hand-off to a crew that was highly trained, but had never before encountered a hand-off during cruise. What this means is that unrestricted mass deployment of self-driving cars requires Level 5 autonomy:
Level 5 – Full Automation

• System capability: The driverless car can operate on any road and in any conditions a human driver could negotiate.
• Driver involvement: Entering a destination.
Note that Waymo is just starting to work with Level 4 cars (the link is to a fascinating piece by Alexis C. Madrigal on Waymo's simulation and testing program). There are many other difficulties on the way to mass deployment, outlined by Timothy B. Lee at Ars Technica. Waymo is, though, actually testing Level 4 cars in the benign environment of Phoenix, AZ:
Waymo, the autonomous car company from Google’s parent company Alphabet, has started testing a fleet of self-driving vehicles without any backup drivers on public roads, its chief executive officer said Tuesday. The tests, which will include passengers within the next few months, mark an important milestone that brings autonomous vehicle technology closer to operating without any human intervention.
But the real difficulty is this. The closer the technology gets to Level 5, the worse the hand-off problem gets, because the human has less experience. Incremental progress in deployments doesn't make this problem go away. Self-driving taxis in restricted urban areas maybe in the next five years; a replacement for the family car, don't hold your breath. My grand-children will still need to learn to drive.

31 comments:

  1. Cecilia Kang's Where Self-Driving Cars Go To Learn looks at the free-for-all testing environment in Arizona:

    "Over the past two years, Arizona deliberately cultivated a rules-free environment for driverless cars, unlike dozens of other states that have enacted autonomous vehicle regulations over safety, taxes and insurance.

    Arizona took its anything-goes approach while federal regulators delayed formulating an overarching set of self-driving car standards, leaving a gap for states. The federal government is only now poised to create its first law for autonomous vehicles; the law, which echoes Arizona’s stance, would let hundreds of thousands of them be deployed within a few years and would restrict states from putting up hurdles for the industry."

    What could possibly go wrong?

  2. It seems to me that there's a "good enough" solution for mass deployment before Level 5 is in production, provided that the "pull over button" works and that in all situations where you invoke concern about a human-driver takeover, the AI can reliably default to avoiding hitting anything while it decelerates. That is, if the AI realizes it doesn't know how to handle the situation normally, it accepts defeat and comes to a stop. (That seems to be the norm during current testing, based on my read of Madrigal's Waymo article.) If that's the case, humans don't suddenly have to take over a moving vehicle that's already in a boundary situation. Instead, having stopped, the AI can then reassess (if the confounding factors have changed) or the human can slowly drive out of proximity. Or perhaps such situations become akin to what a flat tire is now--some people are capable of recovering on their own, others wait for roadside assistance.

    Coming to a stop on, or even alongside, a highway is far from ideal, I concede, and will lead to more rear-enders as long as humans still drive some percentage of vehicles. But rear-end accidents are far less likely to cause fatalities than other types (citation needed), so that seems like an acceptable trade-off during a transitional period.

    All that said, I'm cautiously pessimistic about self-driving cars in our lifetimes. I'm more worried about bugs, outages, and hacking preventing widespread implementation.

  3. "how much preparation have federal transportation authorities carried out to meet the challenge of the advent of self-driving cars and trucks? Not nearly enough, according to a new 44-page report by the Government Accountability Office, a Congressional watchdog agency." reports Paul Feldman. And:

    "the U.S. House of Representatives has approved a bill allowing self-driving vehicles to operate on public roadways with minimal government supervision. Similar legislation has been OK’d by a Senate committee, but is currently stalled by a handful of senators concerned about safety provisions."

  4. In increasing order of skepticism, we have first A Decade after DARPA: Our View on the State of the Art in Self-Driving Cars by Bryan Salesky, CEO, Argo AI (Ford's self-driving effort):

    "Those who think fully self-driving vehicles will be ubiquitous on city streets months from now or even in a few years are not well connected to the state of the art or committed to the safe deployment of the technology."

    Second, After Peak Hype, Self-Driving Cars Enter the Trough of Disillusionment by Aarian Marshall at Wired using Gartner’s “hype cycle” methodology:

    "Volvo’s retreat is just the latest example of a company cooling on optimistic self-driving car predictions. In 2012, Google CEO Sergey Brin said even normies would have access to autonomous vehicles in fewer than five years—nope. Those who shelled out an extra $3,000 for Tesla’s Enhanced Autopilot are no doubt disappointed by its non-appearance, nearly six months after its due date. New Ford CEO Jim Hackett recently moderated expectations for the automaker’s self-driving service, which his predecessor said in 2016 would be deployed at scale by 2021. “We are going to be in the market with products in that time frame,” he told the San Francisco Chronicle. “But the nature of the romanticism by everybody in the media about how this robot works is overextended right now.”"

    And third Wired: Self Driving Car Hype Crashes Into Harsh Realities by Yves Smith at naked capitalism, which is the only piece to bring up the hand-off problem:

    "The fudge is to have a human at ready to take over the car in case it asks for help.

    First, as one might infer, the human who is suddenly asked to intervene is going to have to quickly assess the situation. The handoff delay means a slower response than if a human had been driving the entire time. Second, and even worse, the human suddenly asked to take control might not even see what the emergency need is. Third, the car itself might not recognize that it is about to get into trouble."

    All three pieces are worth reading.

  5. More skepticism from Christian Wolmar:

    “This is a fantasy that has not been thought through, and is being promoted by technology and auto manufacturers because tech companies have vast amounts of footloose capital they don’t know what to do with, and auto manufacturers are terrified they’re not on board with the new big thing,” he said. “So billions are being spent developing technology that nobody has asked for, that will not be practical, and that will have many damaging effects.”

    He has an entire book on the topic.

  6. Tim Bradshaw reports:

    "Autonomous vehicles are in danger of being turned into “weapons”, leading governments around the world to block cars operated by foreign companies, the head of Baidu’s self-driving car programme has warned.

    Qi Lu, chief operating officer at the Chinese internet group, said security concerns could become a problem for global carmakers and technology companies, including the US and China.

    “It has nothing to do with any particular government — it has to do with the very nature of autonomy,” he said on the sidelines of the Consumer Electronics Show last week. “You have an object that is capable of moving by itself. By definition, it is a weapon.”

    Charlie Stross figured this out ten years ago.

  7. “We will have autonomous cars on the road, I believe within the next 18 months,” [Uber CEO Khosrowshahi] said. ... for example, Phoenix, there will be 95% of cases where the company may not have everything mapped perfectly, or the weather might not be perfect, or there could be other factors that will mean Uber will opt to send a driver. “But in 5 percent of cases, we’ll send an autonomous car,” Khosrowshahi said, when everything’s just right, and still the user will be able to choose whether they get an AV or a regular car." reports Darrell Etherington at TechCrunch. Given that Uber loses $5B/yr and Khosrowshahi has 25 months to IPO it, you should treat everything he says as pre-IPO hype.

  8. Uber and Lyft want you banned from using your own self-driving car in urban areas is the title of a piece by Ethan Baron at siliconbeat. The geometric impossibility of replacing mass transit with fleets of autonomous cars is starting to sink in.

  9. Ross Marchand at Real Clear Policy looks into Waymo's reported numbers:

    "The company’s headline figures since 2015 are certainly encouraging, with “all reported disengagements” dropping from .80 per thousand miles (PTM) driven to .18 PTM. Broken down by category, however, this four-fold decrease in disengagements appears very uneven. While the rate of technology failures has fallen by more than 90 percent (from .64 to .06), unsafe driving rates decreased only by 25 percent (from .16 to .12). ... But the ability of cars to analyze situations on the road and respond has barely shown improvement since the beginning of 2016. In key categories, like “incorrect behavior prediction” and “unwanted maneuver of the vehicle,” Waymo vehicles actually did worse in 2017 than in 2016."

  10. And also The most cutting-edge cars on the planet require an old-fashioned handwashing:

    "For example, soap residue or water spots could effectively "blind" an autonomous car. A traditional car wash's heavy brushes could jar the vehicle's sensors, disrupting their calibration and accuracy. Even worse, sensors, which can cost over $100,000, could be broken.

    A self-driving vehicle's exterior needs to be cleaned even more frequently than a typical car because the sensors must remain free of obstructions. Dirt, dead bugs, bird droppings or water spots can impact the vehicle's ability to drive safely."

  11. "[California]’s Department of Motor Vehicles said Monday that it was eliminating a requirement for autonomous vehicles to have a person in the driver’s seat to take over in the event of an emergency. ... The new rules also require companies to be able to operate the vehicle remotely ... and communicate with law enforcement and other drivers when something goes wrong." reports Daisuke Wakabayashi at the NYT. Note that these are not level 5 autonomous cars, they are remote-controlled.

  12. "Cruise vehicles "can't easily handle two-way residential streets that only have room for one car to pass at a time. That's because Cruise cars treat the street as one lane and always prefer to be in the center of a lane, and oncoming traffic causes the cars to stop."

    Other situations that give Cruise vehicles trouble:

    - Distinguishing between motorcycles and bicycles
    - Entering tunnels, which can interfere with the cars' GPS sensors
    - U-turns
    - Construction zones"

    From Timothy B. Lee's New report highlights limitations of Cruise self-driving cars. It is true that GM's Cruise is trying to self-drive in San Francisco, which isn't an easy place for humans. But they are clearly a long way from Waymo's level, even allowing for the easier driving in Silicon Valley and Phoenix.

  13. "While major technology and car companies are teaching cars to drive themselves, Phantom Auto is working on remote control systems, often referred to as teleoperation, that many see as a necessary safety feature for the autonomous cars of the future. And that future is closer than you might think: California will allow companies to test autonomous vehicles without a safety driver — as long as the car can be operated remotely — starting next month." from John R. Quain's When Self-Driving Cars Can’t Help Themselves, Who Takes the Wheel?.

    So the car is going to call Tech Support and be told "All our operators are busy driving other cars. Your call is important to us, please don't hang up."

  14. "Police in Tempe, Arizona, have released dash cam footage showing the final seconds before an Uber self-driving vehicle crashed into 49-year-old pedestrian Elaine Herzberg. She died at the hospital shortly afterward. ... Tempe police also released internal dash cam footage showing the car's driver, Rafaela Vasquez, in the seconds before the crash. Vasquez can be seen looking down toward her lap for almost five seconds before glancing up again. Almost immediately after looking up, she gets a look of horror on her face as she realizes the car is about to hit Herzberg." writes Timothy B. Lee at Ars Technica.

    In this case the car didn't hand off to the human, but even if it had the result would likely have been the same.

  15. Timothy B. Lee at Ars Technica has analyzed the video and writes Video suggests huge problems with Uber’s driverless car program:

    "The video shows that Herzberg crossed several lanes of traffic before reaching the lane where the Uber car was driving. You can debate whether a human driver should have been able to stop in time. But what's clear is that the vehicle's lidar and radar sensors—which don't depend on ambient light and had an unobstructed view—should have spotted her in time to stop.

    On top of that, the video shows that Uber's "safety driver" was looking down at her lap for nearly five seconds just before the crash. This suggests that Uber was not doing a good job of supervising its safety drivers to make sure they actually do their jobs."

  16. "In a blogpost, Tesla said the driver of the sport-utility Model X that crashed in Mountain View, 38-year-old Apple software engineer Wei Huang, “had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision." reports The Guardian. The car tried to hand off to the driver but he didn't respond.

  17. “Technology does not eliminate error, but it changes the nature of errors that are made, and it introduces new kinds of errors,” said Chesley Sullenberger, the former US Airways pilot who landed a plane in the Hudson River in 2009 after its engines were struck by birds and who now sits on a Department of Transportation advisory committee on automation. “We have to realize that it’s not a panacea.” from the New York Times editorial The Bright, Shiny Distraction of Self-Driving Cars.

  18. In The way we regulate self-driving cars is broken—here’s how to fix it Timothy B. Lee sets out a very pragmatic approach to regulation of self-driving cars. Contrast this with the current rush to exempt them from regulations! For example:

    "Anyone can buy a conventional car and perform safety tests on it. Academic researchers, government regulators, and other independent experts can take a car apart, measure its emissions, probe it for computer security flaws, and subject it to crash tests. This means that if a car has problems that aren't caught (or are even covered up) by the manufacturer, they're likely to be exposed by someone else.

    But this kind of independent analysis won't be an option when Waymo introduces its driverless car service later this year. Waymo's cars won't be for sale at any price, and the company likely won't let customers so much as open the hood. This means that the public will be mostly dependent on Waymo itself to provide information about how its cars work."

  19. In People must retain control of autonomous vehicles Ashley Nunes, Bryan Reimer and Joseph F. Coughlin sound a warning against Level 5 self-driving vehicles and raise strong cautions against rushed deployment of lower levels in two areas:

    Liability:

    "Like other producers, developers of autonomous vehicles are legally liable for damages that stem from the defective design, manufacture and marketing of their products. The potential liability risk is great for driverless cars because complex systems interact in ways that are unexpected."

    Safety:

    "Driverless cars should be treated much like aircraft, in which the involvement of people is required despite such systems being highly automated. Current testing of autonomous vehicles abides by this principle. Safety drivers are present, even though developers and regulators talk of full automation."

  20. Alex Roy's The Half-Life Of Danger: The Truth Behind The Tesla Model X Crash is a must-read deep dive into the details of the argument in this post, with specifics about Tesla's "Autopilot" and Cadillac's "SuperCruise":

    "As I stated a year ago, the more such systems substitute for human input, the more human skills erode, and the more frequently a 'failure' and/or crash is attributed to the technology rather than human ignorance of it. Combine the toxic marriage of human ignorance and skill degradation with an increasing number of such systems on the road, and the number of crashes caused by this interplay is likely to remain constant—or even rise—even if their crash rate declines."

  21. A collection of posts about Stanford's autonomous car research is here. See, in particular, Holly Russell's research on the hand-off problem.

  22. "All companies testing autonomous vehicles on [California]’s public roads must provide annual reports to the DMV about “disengagements” that occur when a human backup driver has to take over from the robotic system. The DMV told eight companies with testing permits to provide clarification about their reports." from Ethan Barron's Self-driving cars’ shortcomings revealed in DMV reports. The clarifications are interesting, including such things as:

    "delayed perception of a pedestrian walking into the street"

    "failed to give way to another vehicle trying to enter a lane"

    "trouble when other drivers behaved badly. Other drivers had failed to yield, run stop signs, drifted out of their own lane and cut in front aggressively"

  23. Angie Schmidt's How Uber’s Self-Driving System Failed to Brake and Avoid Killing Elaine Herzberg reports on the devastating NTSB report:

    "The report doesn’t assign culpability for the crash but it points to deficiencies in Uber’s self-driving car tests.

    Uber’s vehicle used Volvo software to detect external objects. Six seconds before striking Herzberg, the system detected her but didn’t identify her as a person. The car was traveling at 43 mph.

    The system determined 1.3 seconds before the crash that emergency braking would be needed to avert a collision. But the vehicle did not respond, striking Herzberg at 39 mph.

    NTSB writes:

    According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

    Amir Efrati at The Information cites two anonymous sources at Uber who say the company “tuned” its emergency brake system to be less sensitive to unidentified objects."

    People need to be jailed for this kind of irresponsibility.
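
    A rough kinematic check of the NTSB numbers makes the point starker. Assumptions here (mine, not the NTSB's): constant speed up to the braking decision and roughly 0.7 g of deceleration for an emergency stop:

    ```python
    # Distances implied by the NTSB figures quoted above.
    MPH_TO_FTPS = 5280 / 3600        # mph -> feet per second
    G = 32.2                         # gravitational acceleration, ft/s^2
    DECEL = 0.7 * G                  # assumed emergency deceleration

    speed = 43 * MPH_TO_FTPS                  # speed when Herzberg was detected
    detect_distance = speed * 6.0             # detected 6 seconds before impact
    decision_distance = speed * 1.3           # "brake now" 1.3 seconds before impact
    stopping_distance = speed**2 / (2 * DECEL)

    print(f"Distance at detection (6.0 s):   {detect_distance:5.0f} ft")    # ~378 ft
    print(f"Distance at 'brake now' (1.3 s): {decision_distance:5.0f} ft")  # ~82 ft
    print(f"Distance needed to stop:         {stopping_distance:5.0f} ft")  # ~88 ft
    ```

    Even instantaneous hard braking at the 1.3-second mark could not quite have stopped the car, while the 6-second detection left several stopping distances of margin.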

  24. Timothy B. Lee's As Uber and Tesla struggle with driverless cars, Waymo moves forward stresses how far ahead Waymo is in (mostly) self-driving cars:

    "So Waymo's recently announced car deals—20,000 cars from Jaguar Land Rover, another 62,000 from Fiat Chrysler—are just the latest sign that Waymo is assembling all the pieces it will need for a full-scale commercial taxi service in the Phoenix area and likely other places not long after that.

    It would be foolish for Waymo to invest so heavily in all this infrastructure if its technology were still years away from being ready for commercial deployment. Those 23 rider support workers need customers to talk to. And, of course, Waymo needs to get those 82,000 Jaguar and Chrysler vehicles on the road to avoid losing millions of dollars on the investment.

    Throughout all this, Waymo has been testing its vehicles at a faster and faster pace. It took Waymo six months to go from 3 million testing miles in May 2017 to 4 million miles in November. Then it took around three months to reach 5 million miles in February, and less than three months to reach 6 million in early May."

  25. Timothy B. Lee's Why emergency braking systems sometimes hit parked cars and lane dividers makes the same point as my post, this time about "driver assistance" systems:

    "The fundamental issue here is that tendency to treat lane-keeping, adaptive cruise control, and emergency braking as independent systems. As we've seen, today's driver assistance systems have been created in a piecemeal fashion, with each system following a do-no-harm philosophy. They only intervene if they're confident they can prevent an accident—or at least avoid causing one. If they're not sure, they do nothing and let the driver make the decision.

    The deadly Tesla crash in Mountain View illustrates how dangerous this kind of system can be."

    Thus:

    "Once a driver-assistance system reaches a certain level of complexity, the assumption that it's safest for the system to do nothing no longer makes sense. Complex driver assistance systems can behave in ways that surprise and confuse drivers, leading to deadly accidents if the driver's attention wavers for just a few seconds. At the same time, by handling most situations competently, these systems can lull drivers into a false sense of security and cause them to pay less careful attention to the road."

  26. "[Drive.AI board member Andrew Ng] seems to be saying that he is giving up on the promise of self-driving cars seamlessly slotting into the existing infrastructure. Now he is saying that every person, every “bystander”, is going to be responsible for changing their behavior to accommodate imperfect self-driving systems. And they are all going to have to be trained! I guess that means all of us.

    Whoa!!!!

    The great promise of self-driving cars has been that they will eliminate traffic deaths. Now [Ng] is saying that they will eliminate traffic deaths as long as all humans are trained to change their behavior? What just happened?

    If changing everyone’s behavior is on the table then let’s change everyone’s behavior today, right now, and eliminate the annual 35,000 fatalities on US roads, and the 1 million annual fatalities world-wide. Let’s do it today, and save all those lives."

    From Bothersome Bystanders and Self Driving Cars, Rodney Brooks' awesome takedown of Andrew Ng's truly stupid comments reported in Russell Brandom's Self-driving cars are headed toward an AI roadblock:

    "There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.” That delay could have disastrous consequences for companies banking on self-driving technology, putting full autonomy out of reach for an entire generation."

  27. "Drive.ai plans to license its technology to others, and has struck a deal with Lyft, a ride-hailing firm, to operate vehicles in and around San Francisco. “I think the autonomous-vehicle industry should be upfront about recognising the limitations of today’s technology,” says Mr Ng. It is surely better to find pragmatic ways to work around those limitations than pretend they do not exist or promise that solving them will be easy." reports The Economist. They describe drive.ai's extremely constrained trial service:

    "Drive.ai, a startup, has deployed seven minivans to transport people within a limited area of the city that includes an office park and a retail area. ... All pick-ups and drop-offs happen at designated stops, to minimise disruption as passengers get on and off. ... The vans are painted a garish orange and clearly labelled as self-driving vehicles. ... Screens mounted on the vans’ exteriors let them communicate with pedestrians and other road users, ... Similarly, rather than trying to build a vehicle that can navigate roadworks (a notoriously difficult problem, given inconsistent signage), Drive.ai has arranged for the city authorities to tell it where any roadworks are each day, so that its vehicles can avoid them. ... Drive.ai will limit the service to daylight hours, which makes things simpler and safer. Each vehicle will initially have a safety driver, ... If a van gets confused it can stop and call for help: a remote supervisor then advises it how to proceed (rather than driving the vehicle remotely, which would not be safe, says Mr Ng).:

    It seems that Mr. Ng has learned from the response to his comments that it isn't our responsibility to avoid running into his cars.

  28. In Even self-driving leader Waymo is struggling to reach full autonomy Timothy B. Lee reports on the "launch" of Waymo's "public" "autonomous" taxi service:

    "In late September, a Waymo spokeswoman told Ars by email that the Phoenix service would be fully driverless and open to members of the public—claims I reported in this article.

    We now know that Waymo One won't be fully driverless; there will be a driver in the driver's seat. And Waymo One is open to the public in only the narrowest, most technical sense: initially it will only be available to early riders—the same people who have been participating in Waymo's test program for months."

    Even in the benign environment of Phoenix, trained self-driving car drivers are still needed:

    "Over the course of October and November, Randazzo spent three days observing Waymo's cars in action—either by following them on the roads or staking out the company's depot in Chandler. He posted his findings in a YouTube video. The findings suggest that Waymo's vehicles aren't yet ready for fully autonomous operation."

  29. Paris Marx writes in Self-Driving Cars Will Always Be Limited. Even the Industry Leader Admits it:

    "even Waymo’s CEO, John Krafcik, now admits that the self-driving car that can drive in any condition, on any road, without ever needing a human to take control — what’s usually called a “level 5” autonomous vehicle — will never exist. At the Wall Street Journal’s D.Live conference on November 13, Krafcik said that “autonomy will always have constraints.” It will take decades for self-driving cars to become common on roads, and even then they will not be able to drive in certain conditions, at certain times of the year, or in any weather. In short, sensors on autonomous vehicles don’t work well in snow or rain — and that may never change."

  30. Christian Wolmar's My speech on driverless cars at the Transportation Research Board, Washington DC, 15/1/19 is a must-read debunking of the autonomous car hype by a respected British transport journalist. Among his many points:

    "Michael DeKort, an aerospace engineer turned whistleblower wrote recently:

    ‘Handover cannot be made safe no matter what monitoring and notification system is used. That is because enough time cannot be provided to regain proper situational awareness in critical scenarios.’"

    No-one could have predicted ...

  31. Ashley Nunes' The Cost of Self-Driving Cars Will Be the Biggest Barrier to Their Adoption tackles the important question of whether, even if they can be made safe, self-driving cars can be affordable:

    "However, the systems underlying HAVs, namely sensors, radar, and communication devices, are costly compared to older (less safe) vehicles. This raises questions about the affordability of life-saving technology for those who need it most. While all segments of society are affected by road crashes, the risks are greatest for the poor. These individuals are more likely to die on the road partly because they own older vehicles that lack advanced safety features and have lower crash-test ratings.

    Some people have suggested that the inability to purchase HAVs outright may be circumvented by offering these vehicles for-hire. This setup, analogous to modern day taxis, distributes operating costs over a large number of consumers making mobility services more affordable. Self-driving technology advocates suggest that so-called robotaxis, operated by for-profit businesses, could produce considerable savings for consumers."

    Nunes computes that, even assuming the capital cost of a robotaxi is a mere $15K, the answer is public subsidy:

    "consumer subsidies will be crucial to realizing the life-saving benefits of this technology. Although politically challenging, public revenues already pay for a portion of road crash-related expenditures. In the United States alone, this amounts to $18 billion, the equivalent of over $156 in added taxes for every household."

    But to justify the subsidy, they have to be safe. Which brings us back to the hand-off problem.
