The U.S. government has opened a formal investigation into Tesla's Autopilot partially automated driving system after a series of collisions with parked emergency vehicles:
The investigation covers 765,000 vehicles, almost everything that Tesla has sold in the U.S. since the start of the 2014 model year. Of the crashes identified by the National Highway Traffic Safety Administration as part of the investigation, 17 people were injured and one was killed.
NHTSA says it has identified 11 crashes since 2018 in which Teslas on Autopilot or Traffic Aware Cruise Control have hit vehicles at scenes where first responders have used flashing lights, flares, an illuminated arrow board or cones warning of hazards.
...
The agency has sent investigative teams to 31 crashes involving partially automated driver assist systems since June of 2016. Such systems can keep a vehicle centered in its lane and a safe distance from vehicles in front of it. Of those crashes, 25 involved Tesla Autopilot in which 10 deaths were reported, according to data released by the agency.
On the 19th Katyanna Quach reported that Senators urge US trade watchdog to look into whether Tesla may just be over-egging its Autopilot, FSD pudding:
Sens. Edward Markey (D-MA) and Richard Blumenthal (D-CT) put out a public letter [PDF] addressed to FTC boss Lina Khan on Wednesday. In it, the lawmakers claimed "Tesla's marketing has repeatedly overstated the capabilities of its vehicles, and these statements increasingly pose a threat to motorists and other users of the road."
These are ridiculously late. Back in April, after reading Mack Hogan's Tesla's "Full Self Driving" Beta Is Just Laughably Bad and Potentially Dangerous, I wrote Elon Musk: Threat or Menace?:
I'm a pedestrian, cyclist and driver in an area infested with Teslas owned, but potentially not actually being driven, by fanatical early adopters and members of the cult of Musk. I'm personally at risk from these people believing that what they paid good money for was "Full Self Driving". When SpaceX tests Starship at their Boca Chica site they take precautions, including road closures, to ensure innocent bystanders aren't at risk from the rain of debris when things go wrong. Tesla, not so much.
I'm returning to this topic because an excellent video and two new papers have shown that I greatly underestimated the depths of irresponsibility involved in Tesla's marketing.
Let me be clear. Tesla's transformation of electric cars from glorified golf carts to vehicles with better performance, features and economy than their conventional competitors is both an extraordinary engineering achievement and unambiguously good for the planet.
Family members drive a Model 3 and are very happy with it. This post is only about the systems that Tesla tells regulators are a Level 2 Advanced Driver Assistance System (ADAS) but that Tesla markets to the public as "Autopilot" and "Full Self-Driving".
Four years ago John Markoff wrote about Waymo's second thoughts about self-driving cars in Robot Cars Can’t Count on Us in an Emergency:
Three years ago, Google's self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight, to a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.
The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.
As someone who was sharing the road with them, I can testify that seven years ago Waymo's cars were very good at self-driving, probably at least as good as Tesla's are now. But Waymo had run into two fundamental problems:
- Over-trust or complacency. Markoff wrote:
Over-trust was what Google observed when it saw its engineers not paying attention during commutes with prototype self-driving cars. Driver inattention was implied in a recent National Highway Traffic Safety Administration investigation that absolved the Tesla from blame in a 2016 Florida accident in which a Model S sedan drove under a tractor-trailer rig, killing the driver.
The better the system works most of the time, the less likely the driver is to be paying attention when it stops working.
- The hand-off problem. Markoff wrote:
Last month, a group of scientists at Stanford University presented research showing that most drivers required more than five seconds to regain control of a car when — while playing a game on a smartphone — they were abruptly required to return their attention to driving.
Another group of Stanford researchers published research in the journal Science Robotics in December that highlighted a more subtle problem. Taking back control of a car is a very different experience at a high speed than at a low one, and adapting to the feel of the steering took a significant amount of time even when the test subjects were prepared for the handoff.
But as I wrote at the time:
But the problem is actually much worse than either Google or Urmson say. Suppose, for the sake of argument, that self-driving cars three times as good as Waymo's are in wide use by normal people. A normal person would encounter a hand-off once in 15,000 miles of driving, or less than once a year. Driving would be something they'd be asked to do maybe 50 times in their life.
Even if, when the hand-off happened, the human was not "climbing into the back seat, climbing out of an open car window, and even smooching" and had full "situational awareness", they would be faced with a situation too complex for the car's software. How likely is it that they would have the skills needed to cope, when the last time they did any driving was over a year ago, and on average they've only driven 25 times in their life?
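The arithmetic is easy to check. Here is a quick sketch using round numbers consistent with the quoted post; the hand-off interval, annual mileage and driving lifetime are assumptions for illustration:

```python
# Rough reproduction of the hand-off arithmetic above (all inputs are assumed
# round numbers, not measured data).
miles_between_handoffs = 15_000   # cars "three times as good as Waymo's"
annual_miles = 13_500             # roughly the US average per driver
driving_years = 60                # a typical driving lifetime

handoffs_per_year = annual_miles / miles_between_handoffs
lifetime_handoffs = handoffs_per_year * driving_years

print(f"Hand-offs per year: {handoffs_per_year:.1f}")       # ~0.9, less than once a year
print(f"Hand-offs per lifetime: {lifetime_handoffs:.0f}")   # ~54, "maybe 50 times"
```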
Most automated systems are reliable and usually work as advertised. Unfortunately, some may fail or behave unpredictably. Because such occurrences are infrequent, however, people will come to trust the automation. However, can there be too much trust? Just as mistrust can lead to disuse of alerting systems, excessive trust can lead operators to rely uncritically on automation without recognizing its limitations or fail to monitor the automation's behavior. Inadequate monitoring of automated systems has been implicated in several aviation incidents — for instance, the crash of Eastern Flight 401 in the Florida Everglades. The crew failed to notice the disengagement of the autopilot and did not monitor their altitude while they were busy diagnosing a possible problem with the landing gear.
That was published seventeen years before Waymo figured it out.
Musk was not just selling a product he couldn't deliver; he was selling an investment that couldn't deliver — the idea that a "Full Self-Driving" Tesla would make its owner a profit by acting as an autonomous taxi. Tesla's marketeers faced a choice that should not have been hard, but obviously was. They could either tell Musk to back off his hype (and get fired), or go along. Going along required two new marketing techniques, "Autonowashing" the product and "Econowashing" the investment.
Autonowashing
Mahmood Hikmet's must-watch YouTube video Is Elon Musk Killing People? is an excellent introduction to the thesis of Liza Dixon's Autonowashing: The Greenwashing of Vehicle Automation:
According to a recent study, "automated driving hype is dangerously confusing customers", and, further, "some carmakers are designing and marketing vehicles in such a way that drivers believe they can relinquish control" (Thatcham Research, 2018). Confusion created by OEMs via their marketing can be dangerous, "if the human believes that the automation has more capability than it actually has." (Carsten and Martens, 2018). The motivations for this are clear: "Carmakers want to gain competitive edge by referring to 'self-driving' or 'semi-autonomous' capability in their marketing..." (Thatcham Research, 2018). As a result, a recent survey found that 71% of 1,567 car owners across seven different countries believed it was possible to purchase a "self-driving car" today (Thatcham Research, 2018).
Dixon uses three case studies to illustrate autonowashing. First, media headlines:
Over the past decade, terms such as "autonomous", "driverless", and "self-driving" have made increasing appearances in media headlines. These buzzwords are often used by media outlets and OEMs to describe all levels of vehicle automation, baiting interest, sales and "driving traffic" to their respective sites. It is not uncommon to come across an article discussing Level 2 automation as "autonomous" or a testing vehicle as "driverless", even though there is a human safety driver monitoring the vehicle and the environment.
Second, the Mercedes E-Class sedan:
In 2016, Mercedes-Benz launched a new advertising campaign called "The Future" in order to promote the new automated features launching in its E-Class sedan. The campaign stated:
"Is the world truly ready for a vehicle that can drive itself? An autonomous-thinking automobile that protects those inside and outside. Ready or not, the future is here. The all new E-Class: self-braking, self-correcting, self-parking. A Mercedes-Benz concept that's already a reality."
The headline of one of the ads read, "Introducing a self-driving car from a very self-driven company."
Mercedes pulled the campaign, in part because it appeared just after a fatal Autopilot crash and in part because consumer groups were pressuring the FTC.
But primarily Dixon focuses on Tesla's marketing:
It is explicitly stated on the Tesla website and in the vehicle owner's manual in multiple instances that the driver must keep their hands on the wheel and their attention on the road ahead (Tesla, 2019b, 2019a). Despite these statements, Tesla is the only OEM currently marketing Level 2, ADAS equipped vehicles as “self-driving” (The Center for Auto Safety and Consumer Watchdog, 2018).
In October 2016, Tesla announced that “all Tesla vehicles produced in our factory...will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver” (Tesla Inc., 2016a) (see Fig. 2). This announcement also came with the sale of a new Autopilot option called “Full Self-Driving Capability” (FSD). Tesla stated that customers who purchased the FSD upgrade would not experience any new features initially but that in the future, this upgrade would enable the vehicle to be “fully self-driving” (Lee, 2019). This option was later removed, but then subsequently reintroduced for sale in February of 2019.
Dixon Fig. 3
Tesla's CEO Elon Musk has promoted “Full Self-Driving Capability” on his personal Twitter account, in one case stating “Tesla drives itself (no human input at all) thru urban streets to highway to streets, then finds a parking spot” without clarifying that this feature is not yet enabled (@ elonmusk, 2016). Further, Musk has been seen in multiple TV interviews (Bloomberg, 2014; CBS News, 2018) removing his hands from the wheel with Autopilot active. In one of these examples, he did so and stated, “See? It's on full Autopilot right now. No hands, no feet, nothing,” as he demonstrates the system to the interviewer, who is sitting in the passenger seat (Fig. 3) (Bloomberg, 2014). This behavior is at odds with appropriate use, and is explicitly warned against in the Tesla Owner's Manual (Tesla, 2019a).
Let's get real. It is half a decade later and Gabrielle Coppola and Mark Bergen have just published Waymo Is 99% of the Way to Self-Driving Cars. The Last 1% Is the Hardest:
In 2017, the year Waymo launched self-driving rides with a backup human driver in Phoenix, one person hired at the company was told its robot fleets would expand to nine cities within 18 months. Staff often discussed having solved "99% of the problem" of driverless cars. "We all assumed it was ready," says another ex-Waymonaut. "We'd just flip a switch and turn it on."
But it turns out that last 1% has been a killer. Small disturbances like construction crews, bicyclists, left turns, and pedestrians remain headaches for computer drivers. Each city poses new, unique challenges, and right now, no driverless car from any company can gracefully handle rain, sleet, or snow. Until these last few details are worked out, widespread commercialization of fully autonomous vehicles is all but impossible.
Musk wasn't alone in having excessively optimistic timelines, but he was alone in selling vehicles to consumers based on lying about their capabilities. This is bad enough, but the story Hikmet tells is worse. You need to watch his video for the details, but here is the outline (square brackets are timestamps for the video):
Tesla's description of "Autopilot" and "Full Self-Driving" reads:
Autopilot and Full Self-Driving Capability are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment.
In other words, when these automated systems are in use the driver must monitor their behavior and be ready to respond to any anomalies. Dixon writes:
There is a long-standing consensus in human-automation interaction literature which states that humans are generally poor monitors of automation (Bainbridge, 1983; Sheridan, 2002; Strand et al., 2014). Partial vehicle automation requires a shift in the role of the user, from manual control to a supervisory role. Thus, the demand on the user for monitoring increases.
Humans aren't good at the monitoring task; because the system works well most of the time they become complacent. In Tesla's Q1 2018 earnings call Musk explained the problem [29:30]:
when there is a serious accident on autopilot people for some reason think that the driver thought the car was fully autonomous and we somehow misled them into thinking it was fully autonomous it is the opposite case when there is a serious accident maybe always the case that it is an experienced user ... the issue is more one of complacency like we just get too used to it
Thus it is necessary to equip vehicles with Driver Monitoring Systems (DMS), which ensure that the driver is actually paying attention to their assigned task. This has long been a standard in railroad practice. Hikmet's story is essentially about Tesla's DMS, and the conflict it posed between the need to ensure that customers were "fully attentive" at all times, and Elon Musk's irresponsible hype.
Other car companies' DMS are effective. They combine:
- Capacitive sensors ensuring that the driver's hands are on the wheel.
- A camera system looking at the driver, with image processing software ensuring that the driver is looking at the road.
- Infra-red illumination ensuring that the camera system continues to operate at night.
Tesla's solution to this dilemma was to implement a DMS using a torque sensor to determine whether the driver's hands were on the wheel. This suffered from two problems: it did not determine whether the driver was looking at the road, or even in the driver's seat, and the torque needed to activate the sensor was easy to provide with, as Consumer Reports did, a roll of tape [18:13]. Hikmet reports that specifically designed "Steering Wheel Boosters" are easily purchased online.
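For illustration, here is a minimal sketch of the fusion logic an effective DMS implies, and of why a hands-on-wheel signal alone is a poor proxy for attention. Everything in it (field names, thresholds, the decision rule) is hypothetical and invented for this post; it is not any manufacturer's implementation:

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    """One sample of driver-monitoring inputs (hypothetical field names)."""
    hands_on_wheel: bool        # e.g. from capacitive sensors in the wheel rim
    gaze_on_road: bool          # e.g. from an IR-illuminated driver-facing camera
    seconds_inattentive: float  # how long the driver has failed the checks

def monitoring_action(state: DriverState, grace_period: float = 5.0) -> str:
    """Escalate from warning to disengagement if the driver stays inattentive.

    The point of fusing sensors is that wheel torque or capacitance alone
    cannot tell whether the driver is actually watching the road.
    """
    if state.hands_on_wheel and state.gaze_on_road:
        return "ok"
    if state.seconds_inattentive < grace_period:
        return "visual_and_audible_warning"
    return "disengage_and_slow_down"

# Hands resting on the wheel (or a "booster" weight) but eyes elsewhere still escalates.
print(monitoring_action(DriverState(hands_on_wheel=True,
                                    gaze_on_road=False,
                                    seconds_inattentive=7.0)))
```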
Musk's explanation for why Tesla hadn't adopted the industry-standard DMS technology emerged in an April 2019 interview with Lex Fridman [31:19]:
Fridman: Do you see Tesla's Full Self-Driving for a time to come requiring supervision of the human being?
Musk: I think it will require detecting hands on wheel for at least 6 months or something like that. ... The system's improving so much, so fast, that this is going to be a moot point very soon. If something's many times safer than a person, then adding a person the effect on safety is limited. And in fact it could be negative. ... I think it will become very quickly, maybe towards the end of this year, I'll be shocked if it's not next year at the latest, having a human intervene will decrease safety.
Steve Jobs notoriously possessed a "reality distortion field", but it pales compared to Musk's. Not the "end of this year", not "next year at the latest", but two years after this interview NHTSA is investigating why Teslas crash into emergency vehicles and Mack Hogan, writing for the authoritative Road and Track, started an article:
if you thought "Full Self Driving" was even close to a reality, this video of the system in action will certainly relieve you of that notion. It is perhaps the best comprehensive video at illustrating just how morally dubious, technologically limited, and potentially dangerous Autopilot's "Full Self Driving" beta program is.
Why can Musk make the ludicrous claim that Autopilot is safer than a human driver? Hikmet explains that it is because Tesla manipulates safety data to autonowash their technology [20:00]. The screengrab shows Tesla's claim that Autopilot is ten times safer than a human. Hikmet makes three points, with a toy illustration of the base-rate problem after the list:
- Tesla compares a new, expensive car with the average car, which in the US is eleven years old. One would expect the newer car to be safer.
- Tesla compares Autopilot, which works only on the safest parts of the highway network, with the average car on all parts of the network.
- Tesla doesn't disclose whether its data includes Teslas used in other countries, almost all of which have much lower accident rates than the US.
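To see how far such comparisons can mislead, here is a toy calculation. All the rates below are invented for illustration; they are not Tesla's, Hikmet's or NHTSA's numbers. The point is that vehicle age and road mix alone can manufacture a multi-fold "safety advantage" before Autopilot contributes anything:

```python
# Toy illustration of the base-rate problem (all numbers are made up).
# Assumed crash rates per million vehicle miles travelled:
fleet_all_roads = 4.0       # average eleven-year-old fleet, all road types
new_car_all_roads = 2.0     # a new car with modern safety equipment
highway_factor = 0.4        # limited-access highways see far fewer crashes per mile

# "Autopilot miles" are new cars on the safest roads; Tesla's baseline is
# every car on every road.
autopilot_like_rate = new_car_all_roads * highway_factor      # 0.8
apparent_advantage = fleet_all_roads / autopilot_like_rate    # 5.0

print(f"Apparent safety advantage with zero ADAS benefit: {apparent_advantage:.1f}x")
```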
Report highlight: "The data show that the Tesla vehicles crash rate dropped by almost 40% after Autosteer installation"
Two years after NHTSA's 2017 report made the claim shown above, and after they FOIA'ed the underlying data, Quality Control Systems Corporation published a report entitled NHTSA's Implausible Safety Claim for Tesla's Autosteer Driver Assistance System:
we discovered that the actual mileage at the time the Autosteer software was installed appears to have been reported for fewer than half the vehicles NHTSA studied. For those vehicles that do have apparently exact measurements of exposure mileage both before and after the software's installation, the change in crash rates associated with Autosteer is the opposite of that claimed by NHTSA - if these data are to be believed.
For the remainder of the dataset, NHTSA ignored exposure mileage that could not be classified as either before or after the installation of Autosteer. We show that this uncounted exposure is overwhelmingly concentrated among vehicles with the least "before Autosteer" exposure. As a consequence, the overall 40 percent reduction in the crash rates reported by NHTSA following the installation of Autosteer is an artifact of the Agency's treatment of mileage information that is actually missing in the underlying dataset.
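One way such an artifact can arise is easy to demonstrate with a toy model. The numbers below are invented; this is neither QCSC's dataset nor NHTSA's exact methodology. It just shows what happens when crashes are counted in the "before Autosteer" bucket while much of the corresponding "before" mileage goes uncounted:

```python
# Toy model of the missing-mileage artifact (all numbers invented).
# 100 identical vehicles, each driving 10,000 miles before and 10,000 miles
# after Autosteer, with the same true crash rate throughout.
n_vehicles = 100
true_rate = 1.0e-4                       # crashes per mile (hypothetical)
miles_before = miles_after = 10_000

crashes_before = n_vehicles * miles_before * true_rate   # 100
crashes_after = n_vehicles * miles_after * true_rate     # 100

# Suppose the installation-mileage field is missing for 60% of vehicles, so
# their "before" miles cannot be counted -- but their "before" crashes still are.
reported_before_miles = 0.4 * n_vehicles * miles_before
reported_after_miles = n_vehicles * miles_after

rate_before = crashes_before / reported_before_miles     # inflated by missing miles
rate_after = crashes_after / reported_after_miles        # correct

reduction = 1 - rate_after / rate_before
print(f"Apparent crash-rate reduction after Autosteer: {reduction:.0%}")
# Prints a 60% "reduction" even though nothing about the vehicles changed.
```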
Returning to the Fridman interview:
Fridman: Many in the industry believe you have to have camera-based driver monitoring. Do you think there could be benefit gained from driver monitoring?
Musk: If you have a system that's at or below human-level reliability, then driver monitoring makes sense, but if your system is dramatically better, more reliable than a human then driver monitoring does not help much.
Musk's reality distortion field is saying Autopilot is "dramatically better ... than a human". Who are you going to believe — Musk or the guys in the 11 emergency vehicles hit by Teslas on Autopilot?
As Timothy B. Lee reported nearly a year ago in Feds scold Tesla for slow response on driver monitoring:
The National Transportation Safety Board, a federal agency tasked with investigating transportation crashes, published a preliminary report Tuesday about a January 2018 crash in Culver City, California. For the most part, the report confirmed what we already knew about the incident: a Tesla Model S with Autopilot engaged crashed into a fire truck at 31 miles per hour. Thankfully, no one was seriously injured.
But near the end of its report, NTSB called Tesla out for failing to respond to a 2017 recommendation to improve its driver monitoring system.
What the 2017 report said [42:04] was:
monitoring steering wheel torque provides a poor surrogate means of determining the automated vehicle driver's degree of engagement with the driving task
They recommended manufacturers "develop applications to more effectively sense the driver's level of engagement". Five of the manufacturers responded; Tesla didn't.
This Potemkin system likely won't be enough for China. Simon Sharwood reports that, under new rules announced this month:
Behind the wheel, drivers must be informed about the vehicle's capabilities and the responsibilities that rest on their human shoulders. All autonomous vehicles will be required to detect when a driver's hands leave the wheel, and to detect when it's best to cede control to a human.
And, as a further illustration of how little importance Tesla attaches to the necessary belt and braces approach to vehicle safety, in May Tesla announced that their Model 3 and Model Y cars will no longer have radar. They will rely entirely on image processing from cameras. Removing a source of navigation data seems likely to impair the already inadequate performance of Autopilot and Full Self-Driving in marginal conditions, exactly the kind of conditions in which someone who takes Musk at his word would be likely to be using the systems.
As Hikmet says, people have died and will continue to die because Elon Musk regards driver monitoring as an admission of failure.
Econowashing
The stock market currently values Ford at around $50B, General Motors at around $70B, Toyota at around $237B and Volkswagen at around $165B. It currently values Tesla at about 1/3 more than these four giants of its industry combined. That is after its P/E dropped from a peak of almost 1,400 to its current 355, which is still incredible compared to established high-tech growth companies such as Nvidia (P/E 70). The market clearly can't be valuing Tesla on the basis that it makes and sells cars. That's a low-margin manufacturing business. Tesla needs a story about the glorious future enormous high-margin high-tech business that will shower it with dollars. And that is where econowashing comes in.
Musk is repeatedly on record as arguing that the relatively high price of his cars is justified because, since they will be able to earn money for their owners as robotaxis, they are appreciating assets. Here is Musk in 2019:
I feel very confident in predicting autonomous robotaxis for Tesla next year. Not in all jurisdictions because we won't have regulatory approval everywhere, but I am confident we will have regulatory approval somewhere, literally next year
And here is Musk this year:
In January, after Tesla stock shot up nearly 700 percent over the course of a year, Elon Musk explained how shared autonomous vehicles, or SAVs, can help justify the company's valuation.
Speaking hypothetically on a fourth-quarter earnings call in January, Musk laid out a scenario in which Tesla reached $50 billion or $60 billion in annual sales of fully self-driving cars that could then be used as robotaxis.
"Used as robotaxis, the utility increases from an average of 12 hours a week to potentially an average of 60 hours a week," he told investors on the call, according to a Motley Fool transcript. "So that's, like, roughly a five times increase in utility."
There are already companies that claim to be high-tech in the taxi business, Uber and Lyft. Both initially thought that robotaxis were the key to future profits, but both eventually gave up on the idea. Both have consistently failed to make a profit in the taxi business.
So Uber needed to econowash itself. The second part of Hubert Horan's series Can Uber Ever Deliver?, Understanding Uber's Uncompetitive Costs, explains how they did it:
Uber dealt with this Catch-22 with a combination of willful deception and blatant dishonesty, exploiting the natural information asymmetries between individual drivers and a large, unregulated company. Drivers for traditional operators had never needed to understand the true vehicle maintenance and depreciation costs and financial risks they needed to deduct from gross revenue in order to calculate their actual take home pay.
Horan has just published the 27th part, entitled Despite Staggering Losses, the Uber Propaganda Train Keeps Rolling, explaining the current state of the process:
Ongoing claims about higher driver pay that Uber used to attract drivers deliberately misrepresented gross receipts as net take-home pay, and failed to disclose the substantial financial risk its drivers faced given Uber’s freedom to cut their pay or terminate them at will. Uber claimed “[our} driver partners are small business entrepreneurs demonstrating across the country that being a driver is sustainable and profitable…the median income on UberX is more than $90,000/year/driver in New York and more than $74,000/year/driver in San Francisco”[4] even though it had no drivers with earnings anything close to these levels.[5]
...
An external study of actual driver revenue and vehicle expenses in Denver, Houston and Detroit in late 2015, estimated actual net earnings of $10-13/hour, at or below the earnings from the studies of traditional drivers in Seattle, Chicago, Boston and New York and found that Uber was still recruiting drivers with earnings claims that reflected gross revenue, and did not mention expenses or capital risk.
In order to prevent investors and the business press from understanding these results, Uber improperly combined the results of its ongoing, continuing operations with claims about valuation changes in securities of companies operating in markets they had abandoned. To further confuse matters, Uber and Lyft both emphasized a bogus, easily manipulated metric called "Adjusted EBITDA profitability" which does not measure either profitability or EBITDA.
Horan goes on to discuss two recent examples of Uber propaganda: Maureen Dowd's fawning profile of CEO Dara Khosrowshahi, and a Wall Street Journal editorial entitled How Uber and Lyft Can Save Lives, based on more of the bogus "academic" research Uber has a track record of producing.
Part Twenty-Seven returns to an important question this series has discussed on multiple occasions—how can a company that has produced eleven years of horrendous financial results and failed to present any semi-coherent argument as to how it could ever achieve sustainable profitability, still be widely seen as a successful and innovative company? One aspect of that was discussed in Part Twenty-Six: the mainstream business press reports of Uber’s financial results are written by people who have difficulty reading financial statements and do not understand concepts such as “profitability.”
The primary driver of the huge gap between Uber’s positive image and its underlying economic reality was its carefully crafted and extremely effective propaganda-based PR program. This series has documented the origins and results of this program in great detail over the years. [3] In the years before objective data about Uber’s terrible economics became widely available, these accounts were designed to lead customers and local officials into believing that Uber was a well-run and innovative company producing enormous benefits that justified its refusal to obey existing laws and regulations and its pursuit of monopoly power.
Uber propaganda is still being produced since the company needs to give potential investors and the general public some reason to believe that a company with its problematic history and awful financials still has a promising future.
Musk's continual hyping of the prospect of robotaxis flies in the face of the history of Uber and its competitors in the non-automated taxi business. Even if robotaxis worked, they'd be a lot more expensive to buy than conventional taxis. They'd eliminate paying the driver, but the Uber driver is lucky to make minimum wage. And they'd incur other costs such as monitoring to rescue the cars from their need to hand-off to a non-existent driver (as has been amply demonstrated by Waymo's Phoenix trial). If Uber can't make a profit and can't figure out how to make a profit even if cars drive themselves, skepticism of Musk's robotaxi hype was clearly justified.
Now, Estimating the energy impact of electric, autonomous taxis: Evidence from a select market by Ashley Nunes et al puts some real detail behind the skepticism. They compare Autonomous Taxis (ATs) with Conventional Taxis (CTs) and Personal Vehicles (PVs):
The findings of our paper are fourfold. First, we illustrate that an AT's financial proposition, while being more favorable than CTs, remains — contrary to existing discourse — less favorable than PVs. ATs impose a cost of between $1.42 and $2.24 per mile compared to $3.55 and $0.95 per mile incurred when using CTs and PVs respectively. Second, we identify previously overlooked parameters, the most notable being capacity utilization and profit incentive, as significant impediments to achieving cost parity between ATs and PVs. Omission of these parameters lowers AT rider costs to as low as $0.47 per mile. Third, we document that rebound effects do not require cost parity between ATs and PVs. We show that AT introduction produces a net increase in energy consumption and emissions, despite ATs being more expensive than PVs. Fourth we identify and quantify the technological, behavioral and logistical pathways — namely, conformance to AT-specific energy profile, ride-pooling and 'smart deployment' — required to achieve net reduction in energy consumption and emissions owing to AT deployment.
For the purpose of critiquing Tesla's econowashing, it is only necessary to consider Nunes et al's financial analysis. Their model includes the full range of cost factors:
Expenditures considered when estimating consumer cost include vehicle financing, licensing, insurance, maintenance, cleaning, fuel and, for ATs specifically, safety oversight (16,22). Requisite safety oversight is assumed to decrease as AT technology advances. We also take account of operator-envisioned profit expectations and fluctuations in capacity utilization rates that reflect demand heterogeneity.
They explain the difference between their analysis and earlier efforts thus:
AT cost estimates also consider heterogeneity in vehicle operational lifespan and annual mileage. As the pro-rating of fixed costs over time impacts the financial proposition of ATs, both factors warrant attention. Mileage heterogeneity considers vehicle recharging requirements that may limit vehicle productivity and subsequently, profitability (23,24). Productivity may be further impeded when vehicle electrification is paired with vehicle automatization owing to increased vehicular weight, sensor load and aerodynamic drag, all of which limit vehicle range (25).
We also consider consumer travel time in terms of hourly wages and thus transform differences in travel time to money units (19,26). Literature suggests that productivity benefits are realized through the re-allocation of time to paid or leisure activities that replace the demands of driving on attention. Envisioned benefits include would-be drivers performing other valued activities (19).
Our financial results admittedly differ from past studies demonstrating cost competitiveness of ATs with PVs (12-14,19). The primary reason for this is that our model accounts for capacity utilization considerations and operator-envisioned profit expectations. Although the inclusion of these factors ‘worsens’ an AT’s financial proposition, their consideration is timely and consistent with commercial fleet operator business practices (20,21).
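To make the role of those two parameters concrete, here is a minimal per-mile cost sketch. Every input below is invented for illustration; these are not Nunes et al's parameters. The point is how capacity utilization and an operator's profit margin move an AT from apparently cheaper than a PV to clearly more expensive:

```python
# Toy per-mile cost comparison of an autonomous taxi (AT) and a personal
# vehicle (PV). All inputs are invented for illustration.

def cost_per_revenue_mile(annual_fixed_cost, per_mile_cost, annual_miles,
                          utilization=1.0, profit_margin=0.0):
    """Fixed costs are spread only over miles that carry a paying rider."""
    revenue_miles = annual_miles * utilization
    base = annual_fixed_cost / revenue_miles + per_mile_cost
    return base * (1.0 + profit_margin)

# Personal vehicle: every mile serves its owner, and there is no profit margin.
pv = cost_per_revenue_mile(annual_fixed_cost=7_000, per_mile_cost=0.15,
                           annual_miles=12_000)

# Autonomous taxi: pricier vehicle plus remote safety oversight, only about
# half its miles carry a rider, and the operator expects a return.
at_realistic = cost_per_revenue_mile(annual_fixed_cost=18_000, per_mile_cost=0.25,
                                     annual_miles=65_000,
                                     utilization=0.5, profit_margin=0.3)

# The same AT with utilization and profit omitted, as in earlier studies.
at_optimistic = cost_per_revenue_mile(annual_fixed_cost=18_000, per_mile_cost=0.25,
                                      annual_miles=65_000)

print(f"PV:                                   ${pv:.2f}/mile")
print(f"AT (utilization and profit included): ${at_realistic:.2f}/mile")
print(f"AT (both omitted):                    ${at_optimistic:.2f}/mile")
```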
Conclusion
Why does Elon Musk keep lying about the capabilities, timescales and economics of his self-driving technology? After all this time it isn't plausible that someone as smart as Musk doesn't know that "Full Self-Driving" isn't, that it won't be in 6-24 months, and that even if it worked flawlessly the robotaxi idea won't make customers a profit. In fact, we know he does know it. In Tesla's most recent earnings call Musk said (my emphasis):
"We need to make Full Self-Driving work in order for it to be a compelling value proposition," Musk said, adding that otherwise the consumer is "betting on the future."
And last night he tweeted:
FSD Beta 9.2 is actually not great imo
Why Elon Musk Isn't Superman by Tim O'Reilly suggests why Musk needs people to be "betting on the future":
Elon Musk's wealth doesn't come from him hoarding Tesla's extractive profits, like a robber baron of old. For most of its existence, Tesla had no profits at all. It became profitable only last year. But even in 2020, Tesla's profits of $721 million on $31.5 billion in revenue were small—only slightly more than 2% of sales, a bit less than those of the average grocery chain, the least profitable major industry segment in America.
O'Reilly should have noted where 56% of those profits came from:
Tesla's revenue and bottom line were helped by the sale of $401 million in emissions credits in the fourth quarter to other automakers who need them to meet regulatory standards.
He continues:
Why is Musk so rich? The answer tells us something profound about our economy: he is wealthy because people are betting on him.
...
despite their enormous profits and huge cash hoards, Apple, Google, and Facebook have [P/E] ratios much lower than you might expect: about 30 for Apple, 34 for Google, and 28 for Facebook. Tesla at the moment of Elon Musk's peak wealth? 1,396.
The insane 1,396 P/E, and the only slightly less insane current 355 P/E, depend upon investors believing a story. So far this year Musk has lost 22% of his peak paper wealth. If Tesla had dropped to Google's P/E Musk would have lost 93% of his peak paper wealth in 7 months. He would be only 7% as rich as he once thought he was. Preventing that happening by telling plausible stories of future technologies is important to Musk.
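The arithmetic behind those percentages is worth making explicit. A quick check, holding Tesla's current earnings fixed so that the share price scales with the P/E multiple:

```python
# Check of the paper-wealth arithmetic above (earnings held constant, so the
# share price scales with the P/E multiple).
peak_loss_so_far = 0.22      # "has lost 22% of his peak paper wealth"
current_pe = 355
google_pe = 34

current_fraction_of_peak = 1 - peak_loss_so_far                          # 0.78
repriced_fraction = current_fraction_of_peak * (google_pe / current_pe)  # ~0.075

print(f"Remaining at Google's P/E: {repriced_fraction:.1%} of peak")   # ~7.5%
print(f"Loss from peak:            {1 - repriced_fraction:.1%}")       # ~92.5%
```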
Update 24th December 2021: Brian McFadden has figured it out! (Referring to this New York Times story.)
The First Step Toward Protecting Everyone Else From Teslas shows that David Zipper understands the problem with "Full Self-Driving Beta":
ReplyDelete"But there’s another aspect of NHTSA’s investigation that looks promising: It focuses on a specific group of people who were outside the Tesla during a collision. If an EMT has pulled over to check on people involved in a crash, there isn’t much she can do to protect herself from a Model Y bearing down on her."
Jemima Kelly's response to the "Tesla Bot" schedule of "a prototype some time next year":
ReplyDelete"We will probably also by that stage have the Tesla Semi (deposits have been taken for that one since 2017 but is still nowhere to be seen); the second-generation Roadster (likewise, with pre-orders also beginning in 2017); and the robotaxi (a million of which were promised by the end of 2020)."
Laura Dobberstein reports that Chinese auto-maker accused of altering data after fatal autonomous car accident:
ReplyDelete"On August 12, 31 year old Lin Wenqin was using the driver assistance feature on his Nio ES8 when he was involved in a fatal car crash. Chinese state-owned media Global Times reported Lin's car had collided with a construction vehicle on the Shenhai Expressway.
On August 17th, Lin's family reported the data from the crashed vehicle had been tampered with by representatives of the car manufacturer, Nio. The family gave police audio and video recordings of the EV company's employees admitting to having contact with the seven-seater SUV after the crash."
Note the autonowash in the headline:
"The Navigate on Pilot system for the model lands in the level-2 autonomous vehicle category, meaning the driver is ultimately in charge and should be alert."
Cheers for sharing, David. Glad you found some value in the video!
Andrew Kersley documents another case where the autonomy hype was mugged by reality in The slow collapse of Amazon's drone delivery dream:
ReplyDelete"Well over 100 employees at Amazon Prime Air have lost their jobs and dozens of other roles are moving to other projects abroad as the company shutters part of its operation in the UK, WIRED understands. Insiders claim the future of the UK operation, which launched in 2016 to help pioneer Amazon’s global drone delivery efforts, is now uncertain."
Paul MacInnes reports that Toyota pauses Paralympics self-driving buses after one hits visually impaired athlete:
ReplyDelete"Toyota has apologised for the “overconfidence” of a self-driving bus after it ran over a Paralympic judoka in the athletes’ village and said it would temporarily suspend the service.
The Japanese athlete, Aramitsu Kitazono, will be unable to compete in his 81kg category this weekend after being left with cuts and bruises following the impact with the “e-Palette” vehicle. His injuries prompted a personal intervention from the president of Toyota, Akio Toyoda."
And the hits keep coming! Lora Kolodny reports that A Tesla Model 3 hit a parked police car in Orlando, driver said she was ‘in Autopilot’:
ReplyDelete"The driver of a 2019 Tesla Model 3 told officers she was using Autopilot, Tesla’s advanced driver assistance system, when she collided with a police car and a Mercedes SUV Saturday morning around 5 a.m. ET in Orlando, Florida."
Jonathan Gitlin's Tesla must tell NHTSA how Autopilot sees emergency vehicles documents the regulatory vise closing in:
ReplyDelete"The NHTSA sent Tesla the 11-page letter asking for detailed information on how Autopilot recognizes and reacts to emergency vehicles. The company must respond by October 22 unless it asks for an extension, and the AP says Tesla could be fined $114 million if it does not cooperate.
Specifically, the agency wants to know how the system detects "a crash scene, including flashing lights, road flares, reflectorized vests worn by responders, and vehicles parked on the road." Additionally, Tesla must tell NHTSA how Autopilot works in low-light conditions and what happens if the system detects an emergency."
My guess is Tesla has a number of problems here:
1) Autopilot is an AI system, so Tesla doesn't actually know the answer except "we trained it on data and it seems to work".
2) Autopilot, at least in current Model 3 and Model Y, is solely camera-based, so it doesn't work in low-light conditions such as the most recent 5AM crash.
3) When Autopilot detects an emergency it can't handle, it hands-off to the "attentive" driver. Which we know doesn't work.
Audrey LaForest's NHTSA seeking driver-assist data from automakers in Tesla Autopilot probe spells more trouble for Tesla's Autopilot:
ReplyDelete"For each automaker, the agency is seeking the number of vehicles equipped with Level 2 systems that have been manufactured for sale, lease or operation in the U.S. as well as the cumulative mileage covered with the systems engaged and a log of the most recent updates to the systems.
The agency also is requesting all consumer complaints, field reports, crash reports and lawsuits that may relate to the driver-assist systems.
Automakers must describe the types of roads and driving conditions where the systems are intended to be used, and the methods and technologies used to prevent usage outside the operational design domain specified to customers. In addition, automakers must provide an overview of their approach to enforce driver engagement or attentiveness while the systems are in use."
Elon Musk is starting to lose his hold on the cult members, as Matt McFarland reports in Some Tesla owners are losing trust in Elon Musk's promises of 'full self-driving':
ReplyDelete"Frustrated Tesla owners continue to wait for "full self-driving," an expensive and long-delayed software feature that isn't even guaranteed to help their cars' resale values. Some of the company's earliest backers of the "full self-driving" option are even beginning to lose faith in the promise of ever enjoying a truly autonomous Tesla.
...
Owners with "full self-driving" today don't get the big autonomous vision Musk has long promoted, but instead a set of features that drivers with only Autopilot don't have. They include automated lane changes on highways and "Navigate on Autopilot," which guides a Tesla from highway on-ramp to off-ramp. There's also a parking assist feature as well as "smart summon," in which the car can slowly drive through a parking lot to pick up passengers, and a feature to identify stop signs and traffic lights.
For years, Musk made grandiose claims about how soon Tesla's cars will drive themselves. Musk and Tesla have fallen short of these deadlines repeatedly, but he's continued to make optimistic predictions."
Irony alert! Tesla will open controversial FSD beta software to owners with a good driving record by Kristen Korosec explains that in order to use software that means you don't need to drive, you have to be a good enough driver not to need software that means you don't need to drive:
ReplyDelete"Owners who have paid for FSD, which currently costs $10,000, will be offered access to the beta software through a “beta request button.” Drivers who select the beta software will be asked for permission to access their driving behavior using Tesla’s insurance calculator, Musk wrote in a tweet.
“If driving behavior is good for seven days, beta access will be granted,” Musk wrote."
Today's story of Tesla's Autopilot hype enabling stupid and dangerous behavior by the Musk cult members is Clive Thompson's Police use Tesla's autopilot to stop the car after drunk driver passes out:
ReplyDelete"A woman was driving drunk in her Tesla and passed out; the car, under its autopilot mode, kept going down the highway. The woman's husband was apparently driving behind her and called the police. They showed up and stopped the Tesla by parking in the highway. The autopilot detected the possible collision and, this time, worked perfectly, slowing the Tesla to a halt."
Why would anyone think that because they had a car whose Level 2 system required "a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment." that it was OK to drive so drunk you passed out?
Why would Tesla sell a Level 2 system that required "a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment." when the system was unable to detect that the driver had passed out drunk?
Another example of the Musk reality distortion field in Audrey Carleton's A Tesla Big Battery Is Getting Sued Over Power Grid Failures In Australia:
ReplyDelete"On Wednesday, the Australian Energy Regulator (AER), the body that oversees the country’s wholesale electricity and gas markets, announced it had filed a federal lawsuit against the Hornsdale Power Reserve (HPR)—the energy storage system that owns the Tesla battery—for failing to provide “frequency control ancillary services” numerous times over the course of four months in the summer and fall of 2019. In other words, the battery was supposed to supply grid backup when a primary power source, like a coal plant, fails."
They were being paid for being able to provide power at a moment's notice, but when the moment arrived they couldn't.
Andrew J. Hawkins reports on #13 in Tesla sued by Texas cops after a Model X on Autopilot slammed into five officers:
ReplyDelete"A group of Texas law enforcement officials are suing Tesla after a Model X with Autopilot engaged crashed into five police officers.
...
The crash took place February 27, 2021, in Splendora, a small town in Montgomery County in the eastern part of the state. According to the lawsuit, the Model X SUV crashed into several police officers while they were engaged in a traffic stop on the Eastex Freeway in Texas. “All were badly injured,” the lawsuit says."
How many more Teslas on Autopilot slamming into emergency crews will it take before the government takes action?
Nick Carey's Dutch forensic lab says it has decoded Tesla's driving data reveals that:
ReplyDelete"The Dutch government's forensic lab said on Thursday it had decrypted electric carmaker Tesla Inc's closely guarded driving data-storage system, uncovering a wealth of information that could be used to investigate serious accidents.
It was already known that Tesla cars store data from accidents, but the Netherlands Forensic Institute (NFI) said it had discovered far more data than investigators had previously been aware of.
The NFI said the decrypted data showed Tesla vehicles store information about the operation of its driver assistance system, known as Autopilot. The vehicles also record speed, accelerator pedal position, steering wheel angle and brake usage, and depending on how the vehicle is used, that data can be stored for over a year."
And:
"The Dutch lab said rather than seek the data from Tesla, it had "reverse engineered" data logs - a process where software is deconstructed to extract information - present in Tesla vehicles "in order to objectively investigate them."
The NFI investigated a collision involving a Tesla driver using Autopilot and a car in front of it that suddenly braked hard.
The investigation showed the Tesla driver reacted within the expected response time to a warning to resume control of the car, but the collision occurred because the Tesla was following the other vehicle too closely in busy traffic."
I'm shocked, shocked that an Elon Musk company would be less than honest with regulators.
Mahmood Hikmet's 2:19 video Tesla FSD Beta Danger Compilation juxtaposes video of Elon Musk hyping "Full Self Driving" as Level 5 autonomy with video of actual "FSD" disengagements, showing that it isn't even a good Level 2 system.
Matt McFarland's We tried Tesla's 'full self-driving.' Here's what happened is scary:
ReplyDelete"I'd spent my morning so far in the backseat of the Model 3 using "full self-driving," the system that Tesla says will change the world by enabling safe and reliable autonomous vehicles. I'd watched the software nearly crash into a construction site, try to turn into a stopped truck and attempt to drive down the wrong side of the road. Angry drivers blared their horns as the system hesitated, sometimes right in the middle of an intersection."
Elon Musk's response was, as usual, denial:
"I suspect that article was written before the drive even took place,"
The cult members' response was that "Full Self-Driving" wasn't working well because the driver in the front seat was "inexperienced".
Tim Stevens isn't impressed with Autopilot. In 2021 Tesla Model Y review: Nearly great, critically flawed he writes:
ReplyDelete"I can't conclusively say that it's because of the missing radar, but I can say that our Model Y is bad at detecting obstructions ahead. Really, really bad. The big issue is false positives, a problem that has become known as "phantom braking" among Tesla owners. Basically, the car often gets confused and thinks there's an obstacle ahead and engages the automatic emergency braking system. You get an instant, unwanted and often strong application of the brakes. This is not a problem unique to Teslas. I've experienced it on other cars, but very, very rarely. On our Model Y this happens constantly, at least once an hour and sometimes much more often than that. In a single hour of driving I caught five phantom braking incidents on camera, two hard enough to sound the automatic emergency braking chime."
In The Deadly Myth That Human Error Causes Most Car Crashes, David Zipper points out that:
ReplyDelete"More than 20,000 people died on American roadways from January to June, the highest total for the first half of any year since 2006. U.S. road fatalities have risen by more than 10 percent over the past decade, even as they have fallen across most of the developed world. In the European Union, whose population is one-third larger than America’s, traffic deaths dropped by 36 percent between 2010 and 2020, to 18,800. That downward trend is no accident: European regulators have pushed carmakers to build vehicles that are safer for pedestrians and cyclists, and governments regularly adjust road designs after a crash to reduce the likelihood of recurrence."
Thanks to the myth that 94% of crashes are solely the fault of the driver, US car manufacturers and road designers escape responsibility. This is especially convenient for autonowashing:
"the idea that human error causes nearly all crashes is a useful talking point for the makers of autonomous-vehicle technology, which supposedly will prevent such mistakes. Companies including General Motors, Google, and the start-up Aurora have touted the 94 percent statistic in promotional materials, press statements, and even SEC filings. But, as the Carnegie Mellon University engineering professor Phil Koopman has pointed out, autonomous systems will make their own errors on the road. He does not expect AVs to reduce crashes by more than 50 percent, even in a best-case scenario. And an all-autonomous driving future is still at least decades away, suggesting that AVs will not reverse the growing death toll on American roads for many years to come—if they ever do."
Inside Tesla as Elon Musk Pushed an Unflinching Vision for Self-Driving Cars by Cade Metz and Neal E. Boudette is a long article that provides more detail on Musk's irresponsibility. For example:
ReplyDelete"In addition, some who have long worked on autonomous vehicles for other companies — as well as seven former members of the Autopilot team — have questioned Tesla’s practice of constant modifications to Autopilot and F.S.D., pushed out to drivers through software updates, saying it can be hazardous because buyers are never quite sure what the system can and cannot do.
Hardware choices have also raised safety questions. Within Tesla, some argued for pairing cameras with radar and other sensors that worked better in heavy rain and snow, bright sunshine and other difficult conditions. For several years, Autopilot incorporated radar, and for a time Tesla worked on developing its own radar technology. But three people who worked on the project said Mr. Musk had repeatedly told members of the Autopilot team that humans could drive with only two eyes and that this meant cars should be able to drive with cameras alone."
More evidence of the risks Tesla is subjecting innocent bystanders to in this video of "FSD" in downtown San Jose. For example: asking for the driver to take over then not letting him, trying to drive down light rail tracks, and trying to hit a pedestrian who had the walk light (~8min in).
The WaPo finally realizes that Musk is risking innocent lives in Reed Albergotti and Faiz Siddiqui's Tesla test drivers believe they're on a mission to make driving safer for everyone. Skeptics say they're a safety hazard:
ReplyDelete"Marc Hoag, a self-described Tesla fanboy and a shareholder of its stock, waited for a year and a half to get the software. But once he tried it, he was disappointed.
“It’s still so impossibly bad,” he said.
Hoag said the driving experience is worse in person than it looks on videos he’s posted to YouTube, which show the car taking turns too wide, speeding into curves and mistaking a crosswalk sign for a pedestrian — while otherwise acting apprehensively at intersections alongside other traffic. Its fidgety wheel and the indecisive braking make for an unpleasant ride, and its unpredictable nature make it scary, he said."
Emma Roth reports that, in the image of the CEO, Tesla’s ‘Full Self-Driving’ beta has an ‘assertive’ driving mode that ‘may perform rolling stops’:
ReplyDelete"In the description beneath the “Assertive” option, Tesla notes the vehicle will “have a smaller follow distance” and “perform more frequent speed lane changes.” The vehicle will also “not exit passing lanes” and “may perform rolling stops,” and it’s not entirely clear whether this means cars won’t come to a full stop at stop signs."
Thank you, California State Senator Lena Gonzalez! As Russ Mitchell reports in DMV ‘revisiting’ its approach to regulating Tesla’s public self-driving test, she has understood the dangers to the public of Tesla's fan-bois using the laughably named "Full Self Driving" option on public streets:
ReplyDelete"Concerned about public safety, Gonzalez asked the DMV in December for its take on Tesla’s Full Self-Driving beta program, under which Tesla owners supervise the operation of cars programmed to autonomously navigate highways, city streets and neighborhood roads, stopping at traffic lights and stop signs as well as making left and right turns into traffic.
Those are the same features being tested by other robot car developers that report crashes and disengagements to the DMV, a group that includes Waymo, Cruise, Argo and Zoox. Although their cars occasionally crash, there are few YouTube videos that show them behaving dangerously."
And it worked:
"For years, Tesla has tested autonomous vehicle technology on public roads without reporting crashes and system failures to the California Department of Motor Vehicles, as other robot car developers are required to do under DMV regulations.
But confronted with dozens of viral videos showing Tesla’s Full Self-Driving beta technology driving the car into dangerous situations, and a letter of concern from a key state legislator, the DMV now says it’s reviewing Tesla’s behavior and reassessing its own policies.
The agency informed Tesla on Jan. 5 that it is “revisiting” its opinion that the company’s test program doesn’t fall under the department’s autonomous vehicle regulations because it requires a human driver.
“Recent software updates, videos showing dangerous use of that technology, open investigations by the National Highway Traffic Safety Administration, and the opinions of other experts in this space” prompted the reevaluation"
Jonathan M. Gitlin reports that Manslaughter charges follow Tesla driver’s Autopilot red light run:
ReplyDelete"Prosecutors in California have charged a Tesla driver with two counts of manslaughter as a result of a fatal crash in December 2019. According to the Associated Press, the National Highway Traffic Safety Administration confirmed that the Autopilot driver-assistance feature was active at the time of the crash. That makes this case notable in that these are the first felony charges to result from a fatal crash involving a partially automated driving system.
The fatal crash took place in Gardena, California, on December 29, 2019. According to reports, the Tesla Model S owned by Kevin Riad exited I-91, failed to stop at a red light, and then collided with a Honda Civic, killing both of that car's occupants, Gilberto Alcazar Lopez and Maria Guadalupe Nieves-Lopez."
Jonathan M. Gitlin reports that IIHS will rate driver assists like Autopilot and Super Cruise for safety:
ReplyDelete"On Thursday, the Insurance Institute for Highway Safety announced that it is creating a rating system for hands-free advanced driver-assistance systems like Tesla's Autopilot and General Motors' Super Cruise. Later this year IIHS will issue its first set of ratings, with grading levels of good, acceptable, marginal, or poor. Having a good driver-monitoring system will be vital to getting a good grade.
And the institute is not alone. Also on Thursday, Consumer Reports revealed that it, too, will consider the safety of such tech features, adding points if there's a good driver-monitoring system. CR says that so far, only Super Cruise and Ford's BlueCruise systems are safe enough to get those extra points. Meanwhile, from model year 2024, CR will start subtracting points for cars that offer partial automation without proper driver monitoring."
This will be a problem for Tesla, so expect a vocal reaction from the fan-bois.
In Fatal Tesla Model S Crash in California Prompts Federal Probe, Keith Laing reports that:
ReplyDelete"The US National Highway Traffic Safety Administration is investigating a fatal crash involving a 2022 Tesla Model S that may have had its automated driving system activated.
The accident in Newport Beach, California, killed three people earlier this month, according to details provided Wednesday by NHTSA. The electric vehicle hit a curb and slammed into construction equipment, leaving the car totaled, the Orange County Register reported.
The collision is the 42nd included in NHTSA’s Special Crash Investigation of advanced driver assistance systems like Tesla Inc.’s Autopilot.
...
Tesla vehicles have been involved in all but seven of the advanced driver assistance systems that have been added to NHTSA’s investigation."
So there are currently 35 Tesla crashes under investigation. This technology is a clear and present danger to innocent bystanders.
Andrew J. Hawkins' Two new fatal Tesla crashes are being examined by US investigators starts:
ReplyDelete"Two fatal Tesla crashes are being examined by investigators at the National Highway Traffic Safety Administration. Reuters reported that NHTSA opened a special investigation into a recent fatal crash in California, in which a Tesla driver killed a pedestrian. And an agency spokesperson confirmed to The Verge that a crash that took place on July 6th in Florida is also under examination."
Russ Mitchell reports that Musk said not one self-driving Tesla had ever crashed. By then, regulators already knew of 8:
ReplyDelete"Santa Monica investment manager and vocal Tesla booster Ross Gerber was among the allies who sprang to his defense.
“There has not been one accident or injury since FSD beta launch,” he tweeted in January. “Not one. Not a single one.”
To which Musk responded with a single word: “Correct.”
In fact, by that time dozens of drivers had already filed safety complaints with the National Highway Traffic Safety Administration over incidents involving Full Self-Driving — and at least eight of them involved crashes. The complaints are in the public domain, in a database on the NHTSA website."
The title of Max Chafkin's Even After $100 Billion, Self-Driving Cars Are Going Nowhere sums up the must-read article based largely on input from self-driving pioneer Anthony Levandowski:
ReplyDelete"Levandowski says his skepticism of the industry started around 2018. It was a little more than a year after Elon Musk unveiled a demo of a Tesla driving itself to the tune of Paint It Black. Levandowski checked the official road-test data that Tesla submitted to California regulators. The figures showed that, in that time, the number of autonomous miles Tesla had driven on public roads in the state totaled—wait for it—zero. (Tesla hasn’t reported any autonomous miles traveled in California since 2019. The company didn’t respond to a request for comment.) Although Levandowski says he admires Tesla, is impressed by its driver-assistance technology, and believes it may one day produce a truly self-driving car, he says the lack of progress by Musk and his peers forced him to question the point of his own years in the field. “Why are we driving around, testing technology and creating additional risks, without actually delivering anything of value?” he asks.
While Tesla has argued that its current system represents a working prototype, Musk has continued to blur the lines between demos and reality."
San Francisco is now infested with self-driving cars lacking the essential fully trained self-driving car driver. When they come upon situations outside their training data, the result is chaos. First responders are the ones who have to deal with unusual situations, such as a house blowing up when a meth lab malfunctions, and as Joe Eskenazi reports in 'No! You stay!' Cops, firefighters bewildered as driverless cars behave badly they are getting even more annoyed than the rest of SF's residents:
ReplyDelete"The self-driving future has arrived in San Francisco. And, increasingly and all too often, it looks like a confounded cop, road flare in hand, commanding a wayward autonomous vehicle as if it were a misbehaving, two-ton puppy.
“No!” shouts the cop, as captured in his body-worn camera footage. “You stay!”
The incident occurred on Feb. 9, during one of San Francisco’s more memorable recent emergencies: A dollar-store Walter White apparently lost control of his Sunset District garage dope factory, resulting in a lethal explosion and fire. The normally sedate neighborhood in the vicinity of 21st Avenue and Noriega Street was instantly transformed into both a disaster scene and a crime scene.
And, to make it a truly San Francisco scene, a driverless Waymo vehicle subsequently proceeded to meander into the middle of things, like an autonomous Mr. Magoo."
Go read the whole thing, it'll be time well spent.
The AP reports that Cruise agrees to cut fleet of San Francisco robotaxis in half after crashes:
ReplyDelete"General Motors’ Cruise autonomous vehicle unit has agreed to cut its fleet of San Francisco robotaxis in half as authorities investigate two recent crashes in the city.
The state department of motor vehicles asked for the reduction after a Cruise vehicle without a human driver collided with an unspecified emergency vehicle on Thursday.
“The DMV is investigating recent concerning incidents involving Cruise vehicles in San Francisco,” the DMV said on Saturday in a statement to the Associated Press. “Cruise has agreed to a 50% reduction and will have no more than 50 driverless vehicles in operation during the day and 150 driverless vehicles in operation at night.”
The development comes just over a week after California regulators allowed Cruise and the Google spinoff Waymo to operate autonomous robotaxis throughout San Francisco at all hours, despite safety worries spurred by recurring problems with unexpected stops and other erratic behavior.
...
On Thursday around 10pm, the Cruise vehicle had a green light, entered an intersection, and was hit by the emergency vehicle responding to a call, the San Francisco Chronicle reported, based on tweets from Cruise.
The robotaxi was carrying a passenger, who was taken by ambulance to a hospital with injuries that were not severe, Cruise told the newspaper.
Also on Thursday night, a Cruise car without a passenger collided with another vehicle in San Francisco, the newspaper reported."