Tuesday, April 2, 2019

First We Change How People Behave

Then the system will work the way we want. My skepticism about Level 5 self-driving cars keeps getting reinforced. Below the fold, two recent examples.

The fundamental problem of autonomous vehicles sharing roads is that until you get to Level 5, you have a hand-off problem. The closer you get to Level 5, the worse the hand-off problem.

Sean Gallagher's Lion Air 737 MAX crew had seconds to react, Boeing simulation finds shows the hand-off problem for aircraft:
In testing performed in a simulator, Boeing test pilots recreated the conditions aboard Lion Air Flight 610 when it went down in the Java Sea in October, killing 189 people. The tests showed that the crew of the 737 MAX 8 would have only had 40 seconds to respond to the Maneuvering Characteristics Augmentation System’s (MCAS’s) attempts to correct a stall that wasn’t happening before the aircraft went into an unrecoverable dive, according to a report by The New York Times.

While the test pilots were able to correct the issue with the flip of three switches, their training on the systems far exceeded that of the Lion Air crew—and that of the similarly doomed Ethiopian Airlines Flight 302, which crashed earlier this month. The Lion Air crew was heard on cockpit voice recorders checking flight manuals in an attempt to diagnose what was going on moments before they died.
Great, must-read journalism from Dominic Gates at the Seattle Times, Boeing's home-town newspaper. His Flawed analysis, failed oversight: How Boeing and FAA certified the suspect 737 MAX flight control system shows that the fundamental problem with the 737 MAX was Boeing's regulatory capture of the FAA; the FAA's priority wasn't to make the 737 MAX safe, it was to get it to market as quickly as possible, because Airbus had a nine-month lead in this segment. And because Airbus' fly-by-wire planes minimize the need for expensive pilot re-training, Boeing's priority was to remove the need for it.
The company had promised Southwest Airlines Co., the plane’s biggest customer, to keep pilot training to a minimum so the new jet could seamlessly slot into the carrier’s fleet of older 737s, according to regulators and industry officials.

Mr. [Rick] Ludtke [a former Boeing engineer who worked on 737 MAX cockpit features] recalled midlevel managers telling subordinates that Boeing had committed to pay the airline $1 million per plane if its design ended up requiring pilots to spend additional simulator time. “We had never, ever seen commitments like that before,” he said.
The software fix Boeing just announced is merely a patch on a fundamentally flawed design, as George Leopold reports in Software Won’t Fix Boeing’s ‘Faulty’ Airframe. Boeing gamed the regulations, and the FAA let them do it. Neither placed safety first. These revelations should completely destroy the credibility of FAA certifications.

Although Boeing's highly-trained test pilots didn't have to RTFM, they still had only 40 seconds to diagnose and remedy the problem caused by the faulty angle-of-attack sensor and the buggy MCAS software. The inadequately trained Lion Air and Ethiopian Airlines pilots never stood a chance of a successful hand-off. Self-driving car advocates assume that hand-offs are initiated by the software recognizing a situation it can't handle. But in this case the MCAS software was convinced, on the basis of a faulty sensor, that it was handling the situation, and refused to hand off to the pilots 24 times in succession.
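
The missing cross-check is simple enough to state in code. Here is a minimal sketch in Python (purely illustrative; the function name and the disagreement threshold are my assumptions, not anything from Boeing's actual MCAS logic):

```python
# Illustrative sketch only, not Boeing's design: gate automated
# nose-down trim on agreement between the two redundant AoA sensors.

AOA_DISAGREE_LIMIT_DEG = 5.0  # assumed threshold, for illustration


def mcas_may_command_trim(aoa_left_deg: float, aoa_right_deg: float) -> bool:
    """Permit automated trim only if both AoA sensors agree."""
    if abs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_LIMIT_DEG:
        # The sensors disagree, so at least one is faulty. Disengage
        # and alert the crew instead of repeatedly trimming against
        # them on the strength of a single bad reading.
        return False
    return True
```

A single-sensor design cannot run even this trivial check, so one bad vane was enough to keep the system re-engaging against the pilots.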

[Image: Self-driving car stopper]
Self-driving car drivers will lack even the level of training of the dead pilots. The cars' software is equally dependent upon sensors, which can be fooled by stickers on the road*, and cannot handle rain, sleet or snow. Or, as it turns out, pedestrians. As David Zipper tweeted:
Atrios' apt comment was:
It is this type of thing which makes me obsess about this issue. And I have a couple insider sources (ooooh I am a real journalist) who confirm these concerns. The self-driving car people see pedestrians as a problem. I don't really understand how you can think urban taxis are your business model and also think walking is the enemy. Cities are made of pedestrians. Well, cities other than Phoenix, anyway. I pay a dumb mortgage so I can walk to a concert, like I did last night.
But no-one who matters cares about pedestrians because no-one who matters is ever on the sidewalk, let alone crossing the street. As the CDC reports:
In 2016, 5,987 pedestrians were killed in traffic crashes in the United States. This averages to one crash-related pedestrian death every 1.5 hours.

Additionally, almost 129,000 pedestrians were treated in emergency departments for non-fatal crash-related injuries in 2015. Pedestrians are 1.5 times more likely than passenger vehicle occupants to be killed in a car crash on each trip.
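
The CDC's per-hour figure is easy to verify:

```python
# Check the CDC's arithmetic: 5,987 deaths spread over a year.
hours_per_year = 365 * 24
print(f"one pedestrian death every {hours_per_year / 5_987:.1f} hours")
# one pedestrian death every 1.5 hours
```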
The casualties who don't "know what they can't do" won't add much to the deaths and injuries, so we can just go ahead and deploy the technology ASAP.



* Tesla says the "stickers on the road" attack:
is not a realistic concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should always be prepared to do so
Well, yes, but the technology is called "Autopilot" and Musk keeps claiming "full autonomy" is just around the corner.

50 comments:

  1. Sean Gallagher reports that:

    "Delivery of Boeing’s promised fix to the flight system software at the center of two 737 MAX crash investigations has been pushed back several weeks after an internal review by engineers not connected to the aircraft raised additional safety questions. The results of the “non-advocate” review have not been revealed, but the Federal Aviation Administration confirmed on April 1 that the software needed additional work."

  2. Although they did RTFM, it looks like it didn't help:

    "Pilots at the controls of the Boeing Co. 737 MAX that crashed in March in Ethiopia initially followed emergency procedures laid out by the plane maker but still failed to recover control of the jet, according to people briefed on the probe’s preliminary findings."

  3. In Whistleblowers: FAA 737 MAX safety inspectors lacked training, certification, Sean Gallagher reports that:

    "Multiple whistleblowers have raised issues over the Federal Aviation Administration’s safety inspection process connected to Boeing’s 737 MAX aircraft, according to a letter to the FAA from Senate Commerce Committee chairman Sen. Roger Wicker on April 2. And the FAA’s leadership was informed of these concerns as far back as August of 2018.

    The whistleblowers cited “insufficient training and improper certification” of FAA aviation safety inspectors, “including those involved in the Aircraft Evaluation Group (AEG) for the Boeing 737 MAX," Wicker said in his letter to FAA acting administrator David Elwell."

    Both Boeing and the FAA have serious credibility problems.

  4. Izabella Kaminska and Jamie Powell's Uber's conflicting self-driving fleet vision analyzes Uber's IPO documents and shows (a) Uber is betting the future on a fleet of Level 5 cars, and (b) the economics of this bet simply don't work (and of course neither does the technology):

    "But here's the really important factor for would-be buyers of the stock on IPO day. Uber says autonomous driving is essential for it to continue to effectively compete, but it also says these development efforts are capital and operations intensive (the opposite of its supposed asset-light business model today)."

    The quotes they emphasize from the IPO documents are fairly devastating.

  5. Yet again William Gibson was prophetic. In Defense against the Darknet, or how to accessorize to defeat video surveillance, Thomas Claburn describes a real-life version of the "ugliest T-shirt" from Gibson's Zero History.

  6. Julie Bort's An engineer at Uber's self-driving car unit warns that it's more like 'a science experiment' than a real car capable of driving itself shows that in autonomous cars, like everything else, Uber is following the "fake it until you make it" path of today's Silicon Valley startups.

    And for the few in the audience who haven't read Gibson, the "ugliest T-shirt" makes the wearer invisible to surveillance cameras. Makes pedestrians even more of a problem for self-driving cars, no?

  7. Another good post on the 737 MAX crashes is How the Boeing 737 Max Disaster Looks to a Software Developer by Gregory Travis:

    "So Boeing produced a dynamically unstable airframe, the 737 Max. That is big strike No. 1. Boeing then tried to mask the 737’s dynamic instability with a software system. Big strike No. 2. Finally, the software relied on systems known for their propensity to fail (angle-of-attack indicators) and did not appear to include even rudimentary provisions to cross-check the outputs of the angle-of-attack sensor against other sensors, or even the other angle-of-attack sensor. Big strike No. 3."

  8. Christine Negroni's What people don’t get about why planes crash stresses the handoff problem:

    "In the crash of an Asiana Airlines Boeing 777 landing in San Francisco in 2013, investigators determined that a contributing factor was the pilots’ over-reliance on automated systems which led to an erosion in their flying skills. The investigation of the fatal flight of an Air France Airbus A330 from Rio de Janeiro to Paris in 2009 led to the conclusion that the complexity of the fly-by-wire airplane befuddled the pilots.

    The 737 Max probes suggest another variation on the conundrum: Technology intended to protect against pilot error trapped the pilots. Helpless in the cockpit, they were unable to do as Captain Sully did and save the day."

  9. Southwest and FAA officials never knew Boeing turned off a safety feature on its 737 Max jets, and dismissed ideas about grounding them by Hillary Hoffower is based on reporting by Andy Pasztor of the WSJ:

    "Southwest Airlines and the Federal Aviation Administration (FAA) officials who monitor the carrier were unaware that a standard safety feature, designed to warn pilots about malfunctioning sensors, on Boeing 737 Max jets was turned off when Southwest began flying the model in 2017 ... In earlier 737 models, the safety feature alerted pilots when a sensor called the "angle-of-attack vane" incorrectly conveyed the pitch of the plane's nose, according to Pasztor. In the Max, it functions as such while also signaling when the Maneuvering Characteristics Augmentation System (MCAS) — a new automated system linked to both October's Lion Air crash and March's Ethiopian Airlines crash — could misfire; but these alerts were only enabled if carriers purchased additional safety features"

    And:

    "Like other airlines flying the Max, Southwest didn't learn about the change until the aftermath of the Lion Air crash, ... the carrier then asked Boeing to reactivate the alerts on its Max fleet, causing FAA inspectors to contemplate grounding the Max fleet until it was determined whether or not pilots needed additional training — but the idea was quickly dropped.

    Once the feature was reactivated, some FAA officials again considered grounding Southwest's 737 Max fleet to determine whether pilots needed new training — and again, the discussions, which happened via email, were dismissed after a few days"

    It is clear that the FAA's priority was Boeing's competitive position against Airbus, not safety. Additional training would have cost Boeing $1M per plane in penalties to Southwest, and would probably have cost Southwest even more in covering the grounded planes and unavailable pilots.

  10. As usual, Paul Vixie was way ahead of the curve. He wrote Disciplining the Unoccupied Mind in July 2016:

    "Simply put, if you give a human brain the option to perform other tasks than the one at hand, it will do so. No law, no amount of training, and no insistence by the manufacturer of an automobile will alter this fact. It's human nature, immalleable. So until and unless Tesla can robustly and credibly promise an autopilot that will imagine every threat a human could imagine, and can use the same level of caution as the best human driver would use, then the world will be better off without this feature."

    I wrote Techno-hype part 1 16 months later, and this post 32 months later, both with essentially the same message.

  11. Uber, Lyft, Waymo and many others believe that the key market for semi-autonomous (Level 4) cars is robo-taxis. Via Jamie Powell's The questionable economics of autonomous taxi fleets,

    "A new paper out Monday, written by researchers at the Massachusetts Institute of Technology and exclusively shared with FT Alphaville, agrees. It suggests that, at current prices, an automated hive of driverless taxis will actually be more expensive for a consumer to use than the old-world way of owning four wheels.

    Drawing on a wealth of publicly available data, Ashley Nunes and his colleague Kristen Hernandez suggest that the price for taking an autonomous taxi will be between $1.58 to $6.01 on a per-mile basis, versus the $0.72 cost of owning a car. Using San Francisco’s taxi market as its test area, the academics examined a vast array of costs such as licensing, maintenance, fuel and insurance for their calculations."

    Note the "San Francisco". Waymo can't actually make robo-taxis work in Phoenix. The big markets for taxis are old, dense cities such as San Francisco and New York. Nightmares even for human drivers (try driving through Chinatown in SF, or across Manhattan in rush hour).
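
    For concreteness, here is a toy annual-cost comparison built from the quoted per-mile figures (the mileage assumption is mine, not the paper's):

    ```python
    # Toy comparison using the quoted per-mile costs; the annual
    # mileage is an illustrative assumption, not from the paper.
    miles_per_year = 10_000

    owning = 0.72 * miles_per_year          # quoted cost of ownership
    robotaxi_low = 1.58 * miles_per_year    # low end of the estimate
    robotaxi_high = 6.01 * miles_per_year   # high end of the estimate

    print(f"owning:    ${owning:,.0f}/year")  # $7,200/year
    print(f"robo-taxi: ${robotaxi_low:,.0f} to ${robotaxi_high:,.0f}/year")
    # robo-taxi: $15,800 to $60,100/year
    ```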

  12. Boeing Built Deadly Assumptions Into 737 Max, Blind to a Late Design Change is the New York Times longread on the process that led to the 737 MAX disasters. It is a story of a siloed organization, with people making safety-critical decisions based on partial or incorrect information about the system in question. It should make everyone think twice before flying on any Boeing plane:

    "But many people involved in building, testing and approving the system, known as MCAS, said they hadn’t fully understood the changes. Current and former employees at Boeing and the Federal Aviation Administration who spoke with The New York Times said they had assumed the system relied on more sensors and would rarely, if ever, activate. Based on those misguided assumptions, many made critical decisions, affecting design, certification and training."

  13. Clive Irving's How Boeing’s Bean-Counters Courted the 737 MAX Disaster is another good article on how the crisis arose:

    "The origins of the 737 are particularly significant now, with Boeing engulfed in a world crisis of confidence with two crashes of the newest model, the 737 MAX-8, killing 346 people. Specifically, the origins of the design highlight the consequences to Boeing of believing that it could keep upgrading a 50-year-old design indefinitely."

  14. April Glaser interviewed self-driving car pioneer Chris Urmson for How Close Are We to Self-Driving Cars, Really?. He didn't disagree with her question:

    "I’ve read that you think self-driving cars are about five to 10 years away from a small-scale rollout, but 30 to 50 years away from ubiquity, or a very large rollout."

  15. Boeing's disregard of safety in manufacturing and slow-rolling of FAA oversight goes back many years before the 737 MAX disasters, according to a long story by Michael Laris entitled Long before the Max disasters, Boeing had a history of failing to fix safety problems:

    "Repeatedly, safety lapses were identified, and Boeing would agree to fix them, then fail to do so, the FAA said."

  16. In Boeing falsified records for 787 jet sold to Air Canada. It developed a fuel leak, Katie Nicholson reports that:

    "Boeing staff falsified records for a 787 jet built for Air Canada which developed a fuel leak ten months into service in 2015.

    In a statement to CBC News, Boeing said it self-disclosed the problem to the U.S. Federal Aviation Administration after Air Canada notified them of the fuel leak.

    The records stated that manufacturing work had been completed when it had not."

  17. Matt Stoller's The Coming Boeing Bailout? is a good overview of the way anti-trust failure corrupted Boeing:

    "The net effect of the merger, and the follow-on managerial and financial choices, is that America significantly damaged its aerospace industry. Where there were two competitors - McDonnell Douglas and Boeing, now there is one. And that domestic monopoly can no longer develop good civilian aerospace products. Hundreds of people are dead, and tens of billions of dollars wasted."

  18. Jeffrey Rothfeder's For years, automakers wildly overpromised on self-driving cars and electric vehicles—what now? shows that realism about self-driving cars without trained self-driving car drivers is breaking out, now that the Uber IPO is over:

    "Starting around May 2016, Uber projected in public and private presentations that it would manufacture 13,000 autonomous vehicles by 2019, only to change that forecast four months later to over 75,000 units. The company also said that human safety drivers, who take over the wheel when an AV needs help, would not be required on its cars by 2020. And in 2022, the company declared, tens of thousands of fully self-driving Uber taxis would be in 13 of the largest cities. ... the Uber employee responsible for the forecasts said that while she was designing them, executives had asked her “to think about a way” to show accelerated Uber AV development."

    But now:

    "CEO Dara Khosrowshahi said at an Economic Club meeting in Washington, DC, that it will take more than 50 years for all Uber cars to be driverless,"

    And:

    "Waymo’s CEO John Krafcik told a tech conference that it will be decades before autonomous cars are widespread on the roads, and they may always need human assistance to drive in multifaceted environments, such as bad weather or areas crowded with construction or emergency equipment."

    Told you so!

  19. The details in Newly stringent FAA tests spur a fundamental software redesign of Boeing’s 737 MAX flight controls seem somewhat confused, but apparently the fact that MCAS, unlike earlier flight control systems, can override the pilots in ways from which they may be unable to recover means that the fundamental architecture of the 737's flight control software is no longer adequate. The FAA is requiring that the software be re-architected to be more resilient to failures. If so, the predictions of an early return to service are highly optimistic.

  20. Gareth Corfield at The Register has more details on the 737 MAX software re-architecture:

    "Astonishingly, until the 737 Max crashes, the aircraft was flying with no redundancy at all for the flight control computers. If the active one failed or suffered inversion of critical bits in memory, there was no standby unit ready to cut in and continue. The Seattle Times reported that this has now been redesigned so the two onboard computers run in an active:standby configuration. Previously the units merely swapped over in between flights.

    In addition, the computers will receive input from both angle-of-attack sensors rather than just the one. A faulty AoA sensor is thought to have been a contributory factor to the 737 Max crashes, which together cost more than 300 lives."
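
    The active:standby pattern the redesign implies is standard in fault-tolerant systems. A minimal sketch (my illustration, not the actual flight software):

    ```python
    import time

    HEARTBEAT_TIMEOUT_S = 0.1  # assumed value, for illustration


    class FlightComputer:
        """Toy stand-in for one of the two flight control computers."""

        def __init__(self, name: str):
            self.name = name
            self.last_heartbeat = time.monotonic()

        def heartbeat(self) -> None:
            self.last_heartbeat = time.monotonic()


    def select_active(active: FlightComputer, standby: FlightComputer):
        """Promote the standby in flight if the active unit goes
        silent, rather than only swapping units between flights."""
        if time.monotonic() - active.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            return standby, active
        return active, standby
    ```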

  21. Andy Greenberg's A Boeing Code Leak Exposes Security Flaws Deep in a 787's Guts reports that:

    "Santamarta claims that leaked code has led him to something unprecedented: security flaws in one of the 787 Dreamliner's components, deep in the plane's multi-tiered network. He suggests that for a hacker, exploiting those bugs could represent one step in a multi­stage attack that starts in the plane’s in-flight entertainment system and extends to highly protected, safety-critical systems like flight controls and sensors."

    This isn't an immediate threat to the safety-critical systems, which Boeing claims are firewalled:

    "But even granting Boeing's claims about its security barriers, the flaws Santamarta found are egregious enough that they shouldn't be dismissed, says Stefan Savage, a computer science professor at the University of California at San Diego, who is currently working with other academic researchers on an avionics cybersecurity testing platform. "The claim that one shouldn't worry about a vulnerability because other protections prevent it from being exploited has a very bad history in computer security," Savage says. "Typically, where there's smoke there's fire."

    Savage points in particular to a vulnerability Santamarta highlighted in a version of the embedded operating system VxWorks, in this case customized for Boeing by Honeywell."

    Maybe Boeing needs to pay software developers more than $9/hr.

  22. Via Atrios, el gato malo has a good explanation of why, even if Level 5 self-driving were possible, Tesla's "full self-driving" is never going to be it.

  23. Joining the pile-on over Tesla's robo-taxi BS, Keubiko's Tesla's Robotaxi Red Herring estimates the crashes and deaths that Tesla's own projections imply:

    "Even if autonomous cars are as good as human drivers by 2023, is it reasonable or feasible to think that the news flow, consumer acceptance, politicians, and regulators will accept anywhere near these numbers? If a single Uber test vehicle death can send the industry into a tizzy, what would thousands of crashes per day and a death every 90 minutes or so look like? This even ignores the stats on the miles that would be owner-driven (in autonomous mode) and not “robotaxi”.

    As an analogue, look at what Boeing is dealing with on its 737 Max. Air travel is still statistically very safe, and the 737 Max had well over 40,000 flights before the two crashes within 5 months grounded (justifiably so) the fleet.

    Does anyone honestly believe that a newly emerging industry can withstand the news flow anywhere close to these numbers?"

  24. How does an autonomous car work? Not so great by Youjin Shin, Chris Alcantara and Aaron Steckelberg at the WaPo is a great interactive explanation of many of the limitations of self-driving car technology other than the hand-off problem. Go check it out.

  25. Jennifer Elias' Alphabet exec blames media for overhyping self-driving cars, even though Google drove the hype illustrates the slow dawning of the realization that Level 5 is nowhere close:

    "[Waymo] has dialed back its enthusiastic tone as it falls behind its original timeline for getting full self-driving cars on the road. The company said in 2017 that it wouldn't need to wait until 2020 — when analysts expected self-driving cars to go fully autonomous — but that it would give riders the ability within "months."

    Morgan Stanley cut its valuation on Waymo by 40% last month from $175 billion to $105 billion, concluding that the industry is moving toward commercialization slower than expected and that Waymo still relies on human safety drivers, which CNBC reported in August."

    Elias provides a timeline of Google's optimistic predictions of "full self-driving".

  26. In Hailing a driverless ride in a Waymo, Ed Niedermeyer reports on his first ride in a fully driverless Waymo car, as part of their testing in a small part of Phoenix, AZ:

    "There were moments where the self-driving system’s driving impressed, like the way it caught an unprotected left turn just as the traffic signal turned yellow or how its acceleration matched surrounding traffic. The vehicle seemed to even have mastered the more human-like driving skill of crawling forward at a stop sign to signal its intent.

    Only a few typical quirks, like moments of overly cautious traffic spacing and overactive path planning, betrayed the fact that a computer was in control. A more typical rider, specifically one who doesn’t regularly practice their version of the driving Turing Test, might not have even noticed them."

    But he points out that:

    "In 2017, Waymo CEO John Krafcik declared on stage at the Lisbon Web Summit that “fully self-driving cars are here.” Krafcik’s show of confidence and accompanying blog post implied that the “race to autonomy” was almost over. But it wasn’t.

    Nearly two years after Krafcik’s comments, vehicles driven by humans — not computers — still clog the roads in Phoenix. The majority of Waymo’s fleet of self-driving Chrysler Pacifica minivans in Arizona have human safety drivers behind the wheel; and the few driverless ones have been limited to testing only."

  27. Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers by Katyanna Quach describes the NTSB report on the killing of a pedestrian by an Uber self-driving car in March 2018:

    "an investigation by the NTSB into the crash has pinpointed a likely major contributing factor: the code couldn't recognize her as a pedestrian, because she was not at an obvious designated crossing. Rather than correctly anticipating her movements as a person moving across the road, it ended up running right into her.

    “The system design did not include a consideration for jaywalking pedestrians,” the watchdog stated [PDF] in its write-up.

    The penalty for jaywalking is death.

  28. In Another company is dialing back expectations for self-driving taxis, Timothy B. Lee reports more realism dawning on the self-driving hype:

    "Daimler is planning to "rightsize" its spending on self-driving taxis, Chairman Ola Källenius said on Thursday. Getting self-driving cars to operate safely in complex urban environments has proved more challenging than people expected a few years ago, he admitted.

    "There has been a reality check setting in here," Källenius said, according to Reuters.

    He is just the latest executive to acknowledge that work on self-driving taxi technology is not progressing as fast as optimists expected two or three years ago. Earlier this year, Ford CEO Jim Hackett sought to dampen expectations for Ford's own self-driving vehicles. Industry leaders Waymo and GM's Cruise missed self-imposed deadlines to launch driverless commercial taxi services in 2018 and 2019, respectively."

  29. In It's Tough To Make Predictions, Especially About The Future, Atrios links to a 2017 survey of car companies' timelines for their self-driving cars. It is an amazing display of irrational optimism.

  30. In Spooky video shows self-driving cars being tricked by holograms, Thor Benson reports:

    "Researchers from Ben-Gurion University of the Negev's (BGU) Cyber Security Research Center in Israel found that both semi-autonomous and fully autonomous cars stopped when they detected what they thought were humans in the street but were actually projections. They also projected a street sign onto a tree and fake lane markers onto the street to trick the cars. The research was published by the International Association for Cryptologic Research."

  31. John Markoff reports on Ben Shneiderman's anti-autonomy campaign in A Case for Cooperation Between Machines and Humans:

    "Late last year, Dr. Shneiderman embarked on a crusade to convince the artificial intelligence world that it is heading in the wrong direction. In February, he confronted organizers of an industry conference on “Assured Autonomy” in Phoenix, telling them that even the title of their conference was wrong. Instead of trying to create autonomous robots, he said, designers should focus on a new mantra, designing computerized machines that are “reliable, safe and trustworthy.”

    There should be the equivalent of a flight data recorder for every robot, Dr. Shneiderman argued."

  32. In Amazon/Zoox: consolidation crunch, Jamie Powell reports on how reality is breaking in on the self-driving car hype:

    "Apart from demonstrating, yet again, that a commercial deployment of self-driving technology is still a dream in the eyes of a few starry-eyed technologists, the mooted acquisition also speaks to other emerging themes in the space.
    ...
    The first is that capital is king. Zoox had not only planned to build the brains of a self-driving car, but manufacture its own autonomous vehicle. The cash required for such a feat runs into the billions.
    ...
    Which ties into our second point – the self-driving car world will have to begin to consolidate. One, because there are arguably only two companies – Google and Amazon – that can support the sort of research and development intensity required without constantly returning to the capital markets. And second because a future where all cars operate on the same plain technologically, and can interact with the required state infrastructure, will require a level of standardisation within the industry which will naturally lend towards there being two active players at best. It is far more likely in 20 years the self-driving car technology suite – from software to sensors – resembles Boeing and Airbus’ stranglehold over the airliner space than the dispersed competitive landscape that currently exists."

  33. Tom Krisher reports in Study: Autonomous vehicles won't make roads completely safe that:

    "A new study says that while autonomous vehicle technology has great promise to reduce crashes, it may not be able to prevent all mishaps caused by human error.

    Auto safety experts say humans cause about 94% of U.S. crashes, but the Insurance Institute for Highway Safety study says computer-controlled robocars will only stop about one-third of them."

  34. The Technology Quarterly section of the Economist's current edition is a skeptical look at AI. One article is entitled Driverless cars show the limits of today’s AI:

    "The problem, says Rodney Brooks, an Australian roboticist who has long been sceptical of grand self-driving promises, is deep-learning approaches are fundamentally statistical, linking inputs to outputs in ways specified by their training data. That leaves them unable to cope with what engineers call “edge cases”—unusual circumstances that are not common in those training data. Driving is full of such oddities. Some are dramatic: an escaped horse in the road, say, or a light aircraft making an emergency landing on a highway (as happened in Canada in April). Most are trivial, such as a man running out in a chicken suit. Human drivers usually deal with them without thinking. But machines struggle."

  35. Self-driving industry takes to the highway after robotaxi failure by Patrick McGee describes the current "pivot" by self-driving car companies:

    "Eleven years on, however, the industry still has little idea what to do with the technology, despite some big advances over the past decade. As the much-hyped, seven-year quest to develop a driverless Uber service has suffered several setbacks, the appetite is now switching beyond robotaxis in search of more profitable avenues.

    The sector is experiencing “autonomous disillusionment”, says Prescott Watson, principal at Maniv Mobility, an early-stage venture capital firm. Now, “the pitch is, ‘robotaxis are a pipe dream’, but let’s take this technology to do something more lucrative,” he adds.

    Investors are still interested in autonomy but the focus has shifted towards practical services such as grocery delivery, automated warehouse robots, and autonomous functions restricted to highways.
    ...
    That does not mean robotaxis are dead per se, but the idea is now on life-support. Aside from fringe efforts, the robotaxi dream is now confined to those with the major financial firepower of a tech company or car giant that can spend many more years on the effort."

  36. Cars with "advanced driver assistance systems" aren't autonomous, but as Timothy B. Lee writes about tests conducted by AAA in New cars can stay in their lane—but might not stop for parked cars, they have run into the "hand-off problem":

    "the advanced driver-assistance systems (ADAS) on the latest cars still struggle to avoid collisions with parked vehicles. They tested cars from BMW, Kia, and Subaru; none consistently avoided running into a fake car partially blocking the travel lane.

    "All test drivers reached a general consensus that combining adaptive cruise and lane-keeping functionalities in a single system did not consistently enhance the driving experience," the report said. The vehicles made mistakes often enough that drivers often found the experience nerve-wracking rather than relaxing.

    Greg Brannon, a co-author of the AAA report, argues that a fundamental challenge with this kind of system is the need to maintain alertness. Human beings are terrible at paying continued attention behind the wheel of a car that mostly drives itself. So when (not if) these vehicles make a mistake, there's a heightened risk that the driver won't be paying close enough attention to recover safely.
    ...
    "Test drivers were sometimes taken by surprise and were required to retake full control in the middle of critical situations with little to no advance notice," the AAA report says."

  37. Nicholas Vega adds to the financial trainwreck that is Uber with Uber’s self-driving car unit has made little progress despite $2.5B price tag:

    "Uber’s efforts to build a self-driving car have cost the company nearly $2.5 billion and still it’s nowhere close to putting a driverless car on the road, according to a new report.

    The ride-hail giant’s Advanced Technologies Group has been beset by infighting and setbacks, the Information reports, leading to fears that rivals like Alphabet-owned Waymo and Apple’s self-driving tech may soon leave it in the dust.

    Despite the team first beginning its research in 2015, Uber’s self-driving car “doesn’t drive well” and “struggles with simple routines and simple maneuvers,” a manager in the unit told CEO Dara Khosrowshahi, the report said."

  38. Brad Templeton's Tesla’s ‘Full Self-Driving’ Is 99.9% There, Just 1,000 Times Further To Go sums it up:

    "There are also several necessary disengagements — where the human driver has to grab the wheel and take control to avoid a very likely accident — in these videos. While no statistics are available about how frequently those are needed, it appears to be reasonably frequent. This is the norm with systems that require constant manual supervision, and is why they need that supervision. All the full robocar projects have also required one (or really two) “safety drivers” behind the wheel who also have needed to do such interventions, frequently at first, and less and less frequently as time goes on. Only recently have Waymo and now Nuro deployed vehicles with no supervising driver on-board. (Cruise recently got the permit to do this but has not yet done it, though they claim they might by the end of this year. Ama/Zoox also has such a permit.)

    Based on the videos and claims by Tesla of it commonly taking Elon Musk on his commute with few interventions and sometimes none, I threw out the number 99.9% in the headline. This is not a precisely calculated number, but a proxy for “seems to work most of the time.” In reality, we would want to calculate how often it is needing interventions."
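
    Templeton's "1,000 times further to go" is easy to reproduce with round numbers (mine, in the spirit of his back-of-envelope, not his exact figures):

    ```python
    # Back-of-envelope with assumed round numbers: "99.9% there"
    # still leaves a ~1,000x reliability gap to human-level driving.
    miles_per_intervention = 500      # assumed rate from the FSD videos
    miles_per_human_crash = 500_000   # rough US crash-rate figure

    gap = miles_per_human_crash / miles_per_intervention
    print(f"improvement still needed: ~{gap:,.0f}x")  # ~1,000x
    ```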

  39. In Insurance Companies Flag “Driver Disengagement” as Factor in Robot Car Safety Lambert Strether discusses a report from the Insurance Institute for Highway Safety entitled Disengagement from driving when using automation during a 4-week field trial:

    "The current study assessed how driver disengagement, defined as visual-manual interaction with electronics or removal of hands from the wheel, differed as drivers became more accustomed to partial automation over a 4-week trial.
    ...
    The longer drivers used partial automation, the more likely they were to become disengaged by taking their hands off the wheel, using a cellphone, or interacting with in-vehicle electronics. Results associated with use of the two ACC systems diverged, with drivers in the S90 exhibiting less disengagement with use of ACC compared with manual driving, and those in the Evoque exhibiting more."

  40. Atrios has a long memory. Four-and-a-half years ago the LA Times wrote:

    "You’re out on the town. You hail a taxi with an app. A cushy vehicle shows up with no steering wheel, no gas pedal, no brake pedal and no driver.

    You’ve heard those ambitious plans spelled out for some pie-in-the-sky future.

    Now, Ford Motor Co. says it will make it happen, and soon.

    The Detroit automaker revealed in broad strokes Tuesday an ambitious strategy to make fully autonomous cars available for sale by 2021. At first they’ll be used for ride sharing and ride hailing, with sales to individual drivers an indeterminate number of years after that, Ford Chief Executive Mark Fields said."

    We were promised flying cars. We don't even have pie-in-the-sky.

  41. In Waymo CEO: Building safe driverless cars is harder than rocket science, Patrick McGee reports that reality is breaking in even on Waymo:

    "Last year was the most significant yet in Waymo’s 11-year effort to develop a driverless car.

    The Google sister company raised $3.2 billion, signed deals with several partners, and launched the world’s first truly driverless taxi service in Phoenix, Arizona.

    Even so, the widespread rollout of fully autonomous vehicles remains slow, staggered, and costly.

    “It’s an extraordinary grind,” said John Krafcik, Waymo chief executive, in an interview with the Financial Times. “I would say it’s a bigger challenge than launching a rocket and putting it in orbit around the Earth... because it has to be done safely over and over and over again.”

    Gone is the optimism of just a couple of years ago. In March 2018, Waymo confidently forecast that “up to 20,000” electric Jaguars “will be built in the first two years of production and be available for riders of Waymo’s driverless service, serving a potential 1 million trips per day.”

    Two months later, it added that “up to 62,000” Chrysler minivans would join its driverless fleet, “starting in late 2018.”

    Today, there is little sign that any of these vehicles have been ordered, and Waymo’s official fleet size remains just 600."

  42. Semantics matter, as shown in Waymo shelves 'self-driving' term for its technology to shore up safety:

    "Waymo swears it's not out to pick nits and give us all an exercise in linguistics. The fact it will no longer use the term "self-driving" when describing its technology is about education and safety, Alphabet Inc.'s division devoted to the technology said Wednesday. Going forward, Waymo will call its technology "fully autonomous" to create, what it believes, is an important distinction.

    The company's argument rests entirely on how the public perceives "self-driving" as a term. Waymo points out, without naming names, that some automakers -- Tesla comes to mind -- toss the phrase around even though its technology doesn't fully drive a car on its own. Worse, Waymo said the proliferation of "self-driving" can lead to drivers taking their hands off the wheel when it's unsafe to do so. We've seen examples of this in the past few years already."

  43. Timothy B. Lee's Self-driving startups are becoming an endangered species reports on the on-going industry shake-out:

    "Voyage was part of a wave of self-driving startups that were founded between 2013 and 2018. Cruise itself was one of the earliest of these companies; it was co-founded in 2013 by its current CEO Kyle Vogt. Others included nuTonomy in 2013, Zoox in 2014, Drive.ai, Optimus Ride, and TuSimple in 2015, Starsky Robotics, Nuro and Udelv in 2016, Voyage, Aurora, and May Mobility in 2017, and Ike and Kodiak Robotics in 2018.

    But over the last three years, these companies have suffered a high attrition rate. Cruise was acquired by GM in 2016. This early acquisition was a sign of confidence in Cruise, and GM has since poured billions of dollars into the startup. Similarly, auto parts maker Aptiv acquired NuTonomy in 2017 and has made its CEO the leader of Motional, a joint venture with Hyundai.

    Other startups didn't have such happy exits. Apple acquired Drive.ai in 2019 as the firm was on the verge of shutting down. Trucking startup Starsky shut down last year, and Amazon bought Zoox for a bargain price. Ike sold to its larger startup rival Nuro in late 2020.

    That leaves several companies still in the field, including trucking startups Kodiak and TuSimple, delivery ventures Udelv and Nuro, and passenger firms Optimus Ride, Voyage, Aurora, and May Mobility. But they're all facing the same basic problem: getting to market is taking a lot longer than most of them expected."

  44. Timothy B. Lee reports that Tesla: “Full self-driving beta” isn’t designed for full self-driving:

    "Despite the "full self-driving" name, Tesla admitted it doesn't consider the current beta software suitable for fully driverless operation. The company said it wouldn't start testing "true autonomous features" until some unspecified point in the future.
    ...
    Tesla added that "we do not expect significant enhancements" that would "shift the responsibility for the entire dynamic driving task to the system." The system "will continue to be an SAE Level 2, advanced driver-assistance feature."

    SAE level 2 is industry jargon for driver-assistance systems that perform functions like lane-keeping and adaptive cruise control. By definition, level 2 systems require continual human oversight. Fully driverless systems—like the taxi service Waymo is operating in the Phoenix area—are considered level 4 systems."

    As the real-time slippage of the schedule indicates, the suckers who paid for Tesla's FSD:

    "assume that the software Tesla named "Full Self Driving beta" is, in fact, a beta version of Tesla's long-awaited fully self-driving software. But in its communications with California officials, Tesla makes it clear that's not true."

    Lee recapitulates the argument of this post in discussing Tesla's approach of evolving a Level 2 system to Level 5:

    "companies like Waymo argue that it's too difficult to get regular customers to pay close attention to an almost-but-not-fully driverless vehicle. If a car drives perfectly for 1,000 miles and then makes a big mistake, there's a significant risk the human driver won't be paying close enough attention to prevent a crash.
    ...
    Musk has always shrugged this critique off. As we've seen, he believes improvements to Autopilot's driver-assistance features will transform it into a system capable of fully driverless operation.

    But in its comments to the DMV, Tesla seems to endorse the opposite viewpoint: that adding "true autonomous features" to Autopilot will require more than just incrementally improving the performance of its existing software.
    ...
    And this makes it a little hard to believe Musk's boast that Tesla will achieve level 5 autonomy by the end of 2021."

  45. Jonathan Gitlin's Strapping a giant teddy bear to a car in the name of highway safety records an interesting experiment with Level 2 driver assistance:

    "It turned out that experienced users were much more likely to notice the bear than either of the inexperienced groups. And among the inexperienced groups, those drivers who were tested while not using the Level 2 system were also more likely to notice the bear than the inexperienced group that did use the system. In fact, the inexperienced group that did use the Level 2 system were twice as likely to not spot the bear at all (compared to both the other groups). This suggests that there's a sweet spot on the learning curve between complete beginners and the superusers who have been blamed for fatal Autopilot crashes after learning how to exploit such systems."

  46. Kyle Wiggers reports that MIT task force predicts fully autonomous vehicles won’t arrive for ‘at least’ 10 years:

    "The faculty and student research team of more than 20 members, as well as an external advisory board, published its latest brief today, focusing on the development of autonomous vehicles. It suggests fully driverless systems will take at least a decade to deploy over large areas and that expansion will happen region-by-region in specific transportation categories, resulting in variations in availability across the country."

  47. Jamie Powell looks at Lyft's sale of its self-driving unit to Toyota in Lyft/Uber: robotaxi tap-out:

    "not only is the technology still at least half a decade away from having full commercial application, but that the cost needed to get there could easily run into the billions. In practice, that means we’re due for more consolidation in the space, as those with the deepest pockets — whether that be Amazon, Toyota, or perhaps even Google — continue to pick off the smaller names.

    But even if we do eventually enter to a robotaxi future of wonderment, will it make any economic sense? That’s a question for now that still remains unanswered. It’s easy to imagine Waymo licensing its self-driving technology to, say, BMW. It’s harder to imagine who will clean, maintain, manage, regulate and assume liability for a fleet of AI robotaxis."

  48. Unlike Tesla, Waymo is taking a responsible approach to autonomous vehicles. In Waymo Public Road Safety Performance Data, Matthew Schwall et al document:

    "more than 6.1 million miles of automated driving in the Phoenix, Arizona metropolitan area, including operations with a trained operator behind the steering wheel from calendar year 2019 and 65,000 miles of driverless operation without a human behind the steering wheel from 2019 and the first nine months of 2020. The paper includes every collision and minor contact experienced during these operations as well as every predicted contact identified using Waymo’s counterfactual (“whatif”) simulation of events had the vehicle’s trained operator not disengaged automated driving. There were 47 contact events that occurred over this time period, consisting of 18 actual and 29 simulated contact events, none of which would be expected to result in severe or life-threatening injuries.
    ...
    Nearly all the events involved one or more road rule violations or other errors by a human driver or road user, including all eight of the most severe events (which we define as involving actual or expected airbag deployment in any involved vehicle). When compared to national collision statistics, the Waymo Driver completely avoided certain collision modes that human-driven vehicles are frequently involved in, including road departure and collisions with fixed objects."
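
    The headline rate from those figures (my arithmetic, not Waymo's):

    ```python
    # Contact-event rate from the numbers quoted above.
    miles = 6_100_000
    events = 18 + 29  # actual + simulated contact events

    rate = events / (miles / 1_000_000)
    print(f"~{rate:.1f} contact events per million miles")  # ~7.7
    ```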

  49. The Costly Pursuit of Self-Driving Cars Continues On. And On. And On. by Cade Metz summarizes the way expectations for autonomous vehicles are being scaled back:

    "The wizards of Silicon Valley said people would be commuting to work in self-driving cars by now. Instead, there have been court fights, injuries and deaths, and tens of billions of dollars spent on a frustratingly fickle technology that some researchers say is still years from becoming the industry’s next big thing."
