Tuesday, January 9, 2024

Autonomous Vehicles: Trough of Disillusionment

[Gartner hype cycle diagram: Jeremykemp, CC BY-SA 3.0]
This is the famous Gartner hype cycle and it certainly appears that autonomous vehicles are currently in the Trough of Disillusionment. Whether they will eventually soar up to the Plateau of Productivity is unknown, but for now it is clear even to practitioners that the hype bubble they have been riding for years has burst.

Below the fold I try to catch up with the flood of reporting on the autonomous vehicle winter triggered by the bursting of the bubble.

Cruise

The most obvious symptom is the implosion of General Motors' Cruise robotaxi division. Brad Templeton worked on autonomy at Waymo and is an enthusiast for the technology. But in Robocar 2023 In Review: The Fall Of Cruise he acknowledges that things aren't going well:
Looking at 2023 in review for self-driving cars, one story stands out as the clear top story of the year, namely the fall of General Motors’ “Cruise” robotaxi division, which led to a pause of operations for Cruise over the whole USA, a ban on operations in California, the resignation of the founder/CEO and much more. That was actually the most prominent component of the other big story of the year, namely the battle between San Francisco and both Cruise and Waymo.

Even this serious mistake should not have and perhaps would not have driven Cruise to its fate. The bigger mistake was they did not want to talk about it. They were eager to show the car’s recorded video of events to the press, and to officials, but only over Zoom. They did not mention the dragging, even when asked about it. I did not ask to see the part of the video after the crash, but others who did see it say it ended before the dragging. The DMV states they were not shown the dragging, and Cruise did not tell them about it, including in a letter they sent shortly after the events. Cruise insists they did show the full video to the DMV, and the DMV insists otherwise, but there is little doubt that they were not open about what was obviously the most important part of the chain of events when it came to understanding the role of the robotaxi in the calamity. Cruise was very eager to show that the initial crash was not their fault, but didn’t want to talk at all about their own serious mistake.
Laura Dobberstein reports on part of the aftermath in GM's Cruise sheds nine execs in the name of safety and integrity:
GM’s self-driving taxi outfit, Cruise, has dismissed nine execs – including its chief operating officer – after staff withheld information regarding an incident in which a woman was injured by one of the firm's robotaxis.

"Today, following an initial analysis of the October 2 incident and Cruise's response to it, nine individuals departed Cruise. These include key leaders from Legal, Government Affairs, and Commercial Operations, as well as Safety and Systems," a Cruise spokesperson told The Register.

"As a company, we are committed to full transparency and are focused on rebuilding trust and operating with the highest standards when it comes to safety, integrity, and accountability and believe that new leadership is necessary to achieve these goals," the spokesperson added.
It isn't just the executives who are feeling the pain, as Hayden Field and Michael Wayland report in GM's Cruise laying off 900 employees, or 24% of its workforce: Read the memo here:
Cruise on Thursday announced internally that it will lay off 900 employees, or 24% of its workforce, the company confirmed to CNBC.

The layoffs, which primarily affected commercial operations and related corporate functions, are the latest turmoil for the robotaxi startup and come one day after Cruise dismissed nine “key leaders” for the company’s response to an Oct. 2 accident in which a pedestrian was dragged 20 feet by a Cruise self-driving car after being struck by another vehicle.

The company had 3,800 employees before Thursday’s cuts, which also follow a round of contractor layoffs at Cruise last month.
Templeton's main argument is that Cruise mishandled the PR problem resulting from their vehicle dragging an injured pedestrian. They certainly did, in particular by apparently concealing information from the Dept. of Motor Vehicles. But the PR problem they faced is generic to autonomous vehicles. They are marketed as being safer than human drivers and, statistically, this may even be true. Waymo certainly makes a plausible claim. But "safer" does not mean "safe", so serious crashes will inevitably occur.

When they do, the company's PR department is in a no-win situation. The reporters who call asking for the company's reaction know that the company has a vast amount of detailed information about what happened, logs and video. But the PR people don't, and even if they did, they lack the skills to interpret it. Given the nature of the AI driving the vehicle, it will take considerable time for the engineers to find the bug that was the root cause.

The honest thing for the PR people to say is "we don't have the details, we'll get back to you when we do". Reporters are likely to hear this as buying time for a cover-up.

The wrong thing for them to do is to give the reporters what little they know, and spin it in ways that minimize the company's fault. Later, when the full details emerge, they will be shown to have covered up the worst. Next time, even if they are honest, they won't be believed.

The PR problem is even worse because it is fundamentally asymmetric. The marketing pitch is that, statistically, autonomous vehicles cause fewer accidents than humans. But the public will never know about accidents that would have happened had a human been driving, but were averted by the AI driver. They will know about accidents that did happen because the AI driver did something that a human would not have, as with the Uber and Cruise incidents. When set against the expectation of airline-level safety, this is an insuperable problem.
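A toy simulation makes the asymmetry concrete. Every number below is invented for illustration; these are not real crash statistics:

```python
# Toy illustration of the observability asymmetry; the mileage and
# crash rates are invented for the example, not real statistics.
import random

random.seed(42)
MILES = 1_000_000
HUMAN_RATE = 2e-5  # invented: crashes per mile with human drivers
AI_RATE = 1e-5     # invented: half the human rate, i.e. "safer"

# Crashes the AI fleet actually has: visible, reported, investigated.
ai_crashes = sum(random.random() < AI_RATE for _ in range(MILES))

# Crashes a human fleet would have had over the same miles: a
# counterfactual nobody can observe.
human_crashes = sum(random.random() < HUMAN_RATE for _ in range(MILES))

print(f"visible AI crashes:         {ai_crashes}")
print(f"invisible averted crashes: ~{human_crashes - ai_crashes}")
```

Every one of the visible crashes makes news; the averted ones exist only as a counterfactual, so the safety case can never be seen, only argued statistically.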

In Driverless cars were the future but now the truth is out: they’re on the road to nowhere, Christian Wolmar, who has a book about the autonomous winter entitled Driverless Cars: On a Road to Nowhere?, points out that Cruise isn't the first robotaxi company felled by an accident. That would be Uber:
Right from the start, the hype far outpaced the technological advances. In 2010, at the Shanghai Expo, General Motors had produced a video showing a driverless car taking a pregnant woman to hospital at breakneck speed and, as the commentary assured the viewers, safely. It was precisely the promise of greater safety, cutting the terrible worldwide annual roads death toll of 1.25m, that the sponsors of driverless vehicles dangled in front of the public.

And that is now proving their undoing. First to go was Uber after an accident in which one of its self-driving cars killed Elaine Herzberg in Phoenix, Arizona. The car was in autonomous mode, and its “operator” was accused of watching a TV show, meaning they did not notice when the car hit Herzberg, who had confused its computers by stepping on to the highway pushing a bike carrying bags on its handlebars. Fatally, the computer could not interpret this confusing array of objects.

Until then, Uber’s business model had been predicated on the idea that within a few years it would dispense with drivers and provide a fleet of robotaxis. That plan died with Herzberg, and Uber soon pulled out of all its driverless taxi trials.
In her review of Wolmar's book, Yves Smith makes two good points:
Wolmar describes how this Brave New World has stalled out. The big reason is that the world is too complicated. Or to put it in Taleb-like terms, there are way too many tail events to get them into training sets for AI in cars to learn about them. The other issue, which Wolmar does not make explicit, is that the public does not appear willing to accept the sort of slip-shod tech standards of buggy consumer software. The airline industry, which is very heavily regulated, has an impeccable safety record, and citizens appear to expect something closer to that…particularly citizens who don’t own or have investments in self-driving cars and never consented to their risks.
It isn't just Cruise that is figuring out that Robotaxi Economics don't work. Rita Liao's Facing roadblocks, China’s robotaxi darlings apply the brakes reports that they don't work in China either:
Despite years of hype and progress in self-driving technologies, the widespread availability of robotaxis remains a distant reality. That’s due to a confluence of challenges, including safety, regulations and costs.

The last factor, in particular, is what has pushed China’s robotaxi pioneers toward more opportunistic endeavors. To become profitable, robotaxis need to eventually remove human operators. Though China recently clarified rules around the need for human supervision, taxis without a driver behind the wheel are allowed only in restricted areas at present. To attract customers, robotaxi services offer deep discounts on their paid rides.

Once the subsidies are gone and initial user curiosity wanes, who’s willing to pay the same amount as taxi fares for a few fixed routes?

Struggling to address that question, China’s robotaxi startups have woken up to the money-burning reality of their business.
So they are pivoting to a viable product:
One logical path to monetize self-driving technology is to sell a less robust version of the technology, namely, advanced driver assistance systems (ADAS) that still require human intervention.

Deeproute, which is backed by Alibaba, significantly scaled back its robotaxi operations this year and plunged right into supplying ADAS to automakers. Its production-ready solution, which includes its smart driving software and lidar-powered hardware, is sold competitively at $2,000. Similarly, Baidu is “downgrading the tech stacks” to find paying customers on its way up what it calls the “Mount Everest of self-driving.”

“The experience and insight gleaned from deploying our solutions in [mass-produced] vehicles is being fed into our self-driving technology, giving us a unique moat around security and data,” a Baidu spokesperson said.
Not a good year for the robotaxi concept, the thing that was supposed to distinguish Tesla's cars from everyone else's because they would earn their owners money while they slept.

Tesla

As usual, when it comes to self-driving, Tesla's story is worse than almost everyone else's. Elon Musk famously claimed that Tesla is worth zero without Full Self Driving. This is typical Musk BS but, unlike some of his other utterances, it contains a kernel of truth. Tesla is valued as a technology company, not a car company. Thus it is critical for Tesla that its technology be viewed as better than that of other car companies; anything that suggests it is limited or inadequate is a big problem not just for the company but also for Musk's personal wealth.

I believe this is why Tesla hasn't implemented effective driver monitoring or geo-fenced its systems. And why members of the Musk cult are so keen on defeating the driver monitoring system with weights on the steering wheel and smiley-face stickers on the camera. Depending upon what you believe, the technology is either groundbreaking autonomy or a below-average Level 2 driver assistance system (Mercedes has fielded a Level 3 system). The cognitive dissonance between the pronouncements of the cult leader and the reality of continually being "nagged" about the limits of the technology is too much for the cult members to take.
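To see why such crude tricks work, consider a minimal sketch of a torque-plus-camera attention gate. The thresholds and interfaces are invented for illustration; Tesla's actual monitoring code is proprietary:

```python
# Toy sketch of a driver-attention gate; the threshold and interfaces
# are invented for illustration, not Tesla's actual implementation.
TORQUE_THRESHOLD_NM = 0.3  # invented minimum torque treated as "hands on"

def attention_ok(steering_torque_nm: float,
                 gaze_confidence: float | None) -> bool:
    """Return True if the driver is judged attentive.

    gaze_confidence is None when the cabin camera is occluded,
    e.g. by a sticker over the lens.
    """
    # A weight hung on the wheel rim applies a constant gravitational
    # torque, so it satisfies this naive check with no driver present.
    hands_on = abs(steering_torque_nm) > TORQUE_THRESHOLD_NM

    if gaze_confidence is None:
        # Failing open here is what makes the sticker trick work; a
        # safe design would treat an occluded camera as inattention.
        return hands_on

    return hands_on and gaze_confidence > 0.5
```

A wheel weight defeats the torque check, and if the system fails open when the camera is occluded, the gate collapses to the torque check alone.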

The Washington Post team of Trisha Thadani, Rachel Lerman, Imogen Piper, Faiz Siddiqui and Irfan Uraizee has been delivering outstanding reporting on Tesla's technology. They started on October 6th with The final 11 seconds of a fatal Tesla Autopilot crash, in which a Tesla driver enabled Autopilot in conditions for which it was not designed, and set the speed to 14mph above the limit. Then on December 10th they followed with Tesla drivers run Autopilot where it’s not intended — with deadly consequences and Why Tesla Autopilot shouldn’t be used in as many places as you think.

They recount a 2019 crash in Florida:
A Tesla driving on Autopilot crashed through a T intersection at about 70 mph and flung the young couple into the air, killing Benavides Leon and gravely injuring Angulo. In police body-camera footage obtained by The Washington Post, the shaken driver says he was “driving on cruise” and took his eyes off the road when he dropped his phone.

But the 2019 crash reveals a problem deeper than driver inattention. It occurred on a rural road where Tesla’s Autopilot technology was not designed to be used. Dash-cam footage captured by the Tesla and obtained exclusively by The Post shows the car blowing through a stop sign, a blinking light and five yellow signs warning that the road ends and drivers must turn left or right.
Note that, just like the repeated crashes into emergency vehicles, the victims did not volunteer to debug Tesla's software. Note also that the Autopilot system was driving 15mph above the speed limit on a road it wasn't designed for, just like the 2019 Banner crash into a semi-trailer. As I wrote about that crash:
It is typical of Tesla's disdain for the law that, although their cars have GPS and can therefore know the speed limit, they didn't bother to program Autopilot to obey the law.
...
Again, Tesla's disdain for the safety of their customers, not to mention other road users, meant that despite the car knowing which road it was on and thus whether it was a road that Autopilot should not be activated on, it allowed Banner to enable it.
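The remedy implied here is not technically deep. Below is a minimal sketch of the gating logic; lookup_road() is a hypothetical stand-in for map data the car already consults via GPS, and the controlled-access attribute mirrors Tesla's own stated design envelope for Autosteer:

```python
# Minimal sketch of GPS-based gating for a driver-assistance system.
# lookup_road() is a hypothetical stand-in for the car's map data.
from dataclasses import dataclass

@dataclass
class RoadInfo:
    speed_limit_mph: float   # posted limit at this location
    controlled_access: bool  # divided highway, no cross traffic

def lookup_road(lat: float, lon: float) -> RoadInfo:
    # Stand-in: a real system would query its navigation map tiles.
    return RoadInfo(speed_limit_mph=55.0, controlled_access=False)

def engagement_allowed(lat: float, lon: float) -> bool:
    # Geo-fence: refuse to engage outside the design envelope.
    return lookup_road(lat, lon).controlled_access

def clamp_set_speed(requested_mph: float, lat: float, lon: float) -> float:
    # Never accept a cruise set-speed above the posted limit.
    return min(requested_mph, lookup_road(lat, lon).speed_limit_mph)
```

Neither check requires new hardware; both use data the cars already collect, which is the point of the criticism.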
Federal regulators have known there was a problem for more than seven years, but they haven't taken effective action:
Nor have federal regulators taken action. After the 2016 crash, which killed Tesla driver Joshua Brown, the National Transportation Safety Board (NTSB) called for limits on where driver-assistance technology could be activated. But as a purely investigative agency, the NTSB has no regulatory power over Tesla. Its peer agency, the National Highway Traffic Safety Administration (NHTSA), which is part of the Department of Transportation, has the authority to establish enforceable auto safety standards — but its failure to act has given rise to an unusual and increasingly tense rift between the two agencies.
The reason may be that car manufacturers "self-certify" conformance with safety standards:
The string of Autopilot crashes reveals the consequences of allowing a rapidly evolving technology to operate on the nation’s roadways without significant government oversight, experts say. While NHTSA has several ongoing investigations into the company and specific crashes, critics argue the agency’s approach is too reactive and has allowed a flawed technology to put Tesla drivers — and those around them — at risk.

The approach contrasts with federal regulation of planes and railroads, where crashes involving new technology or equipment — such as recurring issues with Boeing’s 737 Max — have resulted in sweeping action by agencies or Congress to ground planes or mandate new safety systems. Unlike planes, which are certified for airworthiness through a process called “type certification,” passenger car models are not prescreened, but are subject to a set of regulations called Federal Motor Vehicle Safety Standards, which manufacturers face the burden to meet.
And Tesla's self-certification is self-serving.

Self-certification would work well if the penalty for false certification were severe, but the NHTSA has declined to impose any penalty for Tesla's manifestly inadequate system. It seems that the team's reporting finally drove the NHTSA to do something about the long-standing problems of Autopilot. Reuters reported that Tesla recalls more than 2m vehicles in US over Autopilot system:
Tesla is recalling just over 2m vehicles in the United States fitted with its Autopilot advanced driver-assistance system to install new safeguards, after a safety regulator said the system was open to “foreseeable misuse”.

The National Highway Traffic Safety Administration (NHTSA) has been investigating the electric automaker led by the billionaire Elon Musk for more than two years over whether Tesla vehicles adequately ensure that drivers pay attention when using the driver assistance system.

Tesla said in the recall filing that Autopilot’s software system controls “may not be sufficient to prevent driver misuse” and could increase the risk of a crash.
...
Separately, since 2016, NHTSA has opened more than three dozen Tesla special crash investigations in cases where driver systems such as Autopilot were suspected of being used, with 23 crash deaths reported to date.

NHTSA said there might be an increased risk of a crash in situations when the system is engaged but the driver does not maintain responsibility for vehicle operation and is unprepared to intervene or fails to recognize when it is canceled or not.
The obfuscation is extraordinary — "foreseeable misuse", "may not be sufficient" and "could increase". There is no "foreseeable", "may" or "could"; multiple people have already died because the system was abused. When Tesla is sued over these deaths, its defense is that the system was abused! I believe the system is specifically designed to allow abuse, because preventing abuse would puncture the hype bubble.

Fortunately from the NHTSA's point of view, the recall is pure kabuki, posing no risk from Musk's attack-dog lawyers and cult members because the over-the-air update is cheap and doesn't actually fix the problem. The Washington Post's headline writers didn't understand this when they captioned the team's timeline How Tesla Autopilot got grounded:
Now, more than 2 million Tesla vehicles are receiving a software update to address “insufficient” controls to combat driver inattention while in Autopilot mode. Here’s how the recall unfolded, according to documents from Tesla, safety officials and reporting by The Washington Post.
The team did understand that Autopilot hadn't been "grounded". In Recalling almost every Tesla in America won’t fix safety issues, experts say, they lay it out:
Tesla this week agreed to issue a remote update to 2 million cars aimed at improving driver attention while Autopilot is engaged, especially on surface roads with cross traffic and other hazards the driver-assistance technology is not designed to detect.

But the recall — the largest in Tesla’s 20-year history — quickly drew condemnation from experts and lawmakers, who said new warnings and alerts are unlikely to solve Autopilot’s fundamental flaw: that Tesla fails to limit where drivers can turn it on in the first place.
...
Tesla has repeatedly acknowledged in user manuals, legal documents and communications with federal regulators that Autosteer is “intended for use on controlled-access highways” with “a center divider, clear lane markings, and no cross traffic.”
The Washington Post's Geoffrey A. Fowler checked on the result of the kabuki, and wrote Testing Tesla’s Autopilot recall, I don’t feel much safer — and neither should you:
Last weekend, my Tesla Model Y received an over-the-air update to make its driver-assistance software safer. In my first test drive of the updated Tesla, it blew through two stop signs without even slowing down.

In December, Tesla issued its largest-ever recall, affecting almost all of its 2 million cars. It is like the software updates you get on your phone, except this was supposed to prevent drivers from misusing Tesla’s Autopilot software.

After testing my Tesla update, I don’t feel much safer — and neither should you, knowing that this technology is on the same roads you use.

During my drive, the updated Tesla steered itself on urban San Francisco streets Autopilot wasn’t designed for. (I was careful to let the tech do its thing only when my hands were hovering by the wheel and I was paying attention.) The recall was supposed to force drivers to pay more attention while using Autopilot by sensing hands on the steering wheel and checking for eyes on the road. Yet my car drove through the city with my hands off the wheel for stretches of a minute or more. I could even activate Autopilot after I placed a sticker over the car’s interior camera used to track my attention.
Fowler concludes "I found we have every reason to be skeptical this recall does much of anything". Good job, Tesla!

As a final note, Rishi Sunak, the UK's tech-bro Prime Minister, is naturally determined to make the UK a leader in autonomous vehicles with his new legislation on the subject. But, being a tech-bro, he has no idea of the fundamental problem they pose. Wolmar does understand it, writing:
In the UK, Tesla will fall foul of the legislation introduced into parliament last month, which prevents companies from misleading the public about the capability of their vehicles. Tesla’s troubles have been compounded by the revelations from ex-employee Lukasz Krupski who claims the self-drive capabilities of Teslas pose a risk to the public. Manufacturers will be forced to specify precisely which functions of the car – steering, brakes, acceleration – have been automated. Tesla will have to change its marketing approach in order to comply. So, while the bill has been promoted as enabling the more rapid introduction of driverless cars, meeting its restrictive terms may prove to be an insuperable obstacle for their developers.
Tesla's stock market valuation depends upon "misleading the public about the capability of their vehicles".

Update 12th February 2024

[Chart: price-earnings ratio history of the "Magnificent Seven" stocks (Source)]
Reinforcement for the last point above comes from Esha Dey's Tesla’s Slide Has Investors Wondering If It’s Still Magnificent, and in particular from this chart comparing the history of the price-earnings ratio of the "Magnificent Seven" stocks: Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla. Dey writes:
After doubling last year, Tesla’s stock price is down 22% to start 2024. Compare that to Nvidia Corp.’s 46% surge or Meta Platforms Inc.’s 32% gain since the beginning of the year and it’s easy to see where the questions are coming from. Indeed, it’s by far the worst performer in the Magnificent Seven Index this year.

The problem for the EV maker is six of those seven companies are benefiting from the enthusiasm surrounding burgeoning artificial intelligence technology. The group hit a record 29.5% weighting in the S&P 500 last week even with Tesla’s decline, according to data compiled by Bloomberg. But despite Musk’s efforts to position his company as an AI investment, the reality is Tesla faces a unique set of challenges.

“Although Elon Musk would probably disagree, investors don’t see Tesla as an AI play like most of the other Magnificent Seven stocks,” said Matthew Maley, chief market strategist at Miller Tabak + Co. “We have a much different backdrop for Tesla and the others in the Mag Seven — the demand trend for Tesla products is fading, while it’s exploding higher for those companies that are more associated with AI.”
The problem for Tesla the car company is that increasing investment in launching the Cybertruck and a lower-cost sedan is meeting slowing demand for EVs in general:
“During the year, others in the Mag Seven were able to show how AI was driving real, profitable business growth,” Brian Johnson, former auto analyst with Barclays and founder of Metonic Advisors, said in an interview. “Tesla investors just got some random Optimus videos, Musk’s admission Dojo was a moon shot and yet another full-self-driving release that may be an improvement but still a long ways from robotaxi capability.”
Even if it actually were an AI company, not a car manufacturer, Tesla's PE would be out of line with other, real AI companies. Hence the need for Musk's relentless hype about autonomy and, not incidentally, his demands that Tesla double his stake by diluting the shareholders and re-instate the $55B pay package the Delaware court invalidated. He needs these decisions made while Tesla's PE is nearly double that of the next highest AI company, not once it is valued like a car company.

7 comments:

David. said...

Matt Levine makes the obvious point:

"I feel like Elon Musk’s recent career is a long experiment to prove that, if you are successful enough, the regular laws do not apply to you. I assume that if Musk walked into the office of the secretary of defense and snorted a bag of coke in front of him, no government contracts would be canceled. “Do you want to send up your satellites on my good rockets, or do you want to enforce your rules about drug use by government contractors,” Musk is implicitly asking, and there is an obviously correct answer. Musk is too big to fail a drug test."

David. said...

Something else that will follow the Gartner Hype Cycle is AI, as Daron Acemoglu points out in Get Ready for the Great AI Disappointment:

"In the decades to come, 2023 may be remembered as the year of generative AI hype, where ChatGPT became arguably the fastest-spreading new technology in human history and expectations of AI-powered riches became commonplace. The year 2024 will be the time for recalibrating expectations."

David. said...

Liam Denning describes the problem for Musk if doubts emerge about the AIs driving Teslas:

"Tesla is, overwhelmingly, a maker of electric vehicles, combining high growth with high margins — until recently anyway. Deliveries increased by 38% in 2023 — below the company’s long-term target of 50% per year — and the consensus for 2024 implies just 21%. Trailing 12-month net profit as of the third-quarter was actually down, year over year.

Yet in the most starry-eyed Wall Street financial models, the making and selling of vehicles — generating 92% of Tesla’s current gross profit — accounts for only a fraction of Tesla’s purported valuation. The rest relates to whatever Tesla’s next big thing might turn out to be, usually something related to artificial intelligence, be it robotaxis, licensed self-driving systems, the Optimus humanoid robot or just something else that might spring from the company’s Dojo supercomputing project.

Amorphous as the narrative may be, remove it and the tenuous tether between Tesla’s valuation and something approximating a potential future reality evaporates entirely."

In the linked article, Denning reports on Adam Jonas' bull case for TSLA:

"Tesla’s price of $249 equated to a market cap of $789 billion. Even in Jonas’ bull case, which values Tesla at $1.75 trillion, the core autos business — which generates more than 90% of current gross profit — accounts for only $480 billion. On that basis, even the existing market cap had an extra $300 billion of value that related to … something."

The something is the perception that Tesla is an AI technology company. The trouble is that their AI technology just isn't that good. The perception is driven by Musk's hype, and people's inability to remember all the times his hype fell flat. TSLA's forward PE is 70.94, NVDA's is 74.28. Nvidia doesn't have to make cars - it really is an AI company with world-leading technology, and a CEO with a lot more credibility than Musk. Both PEs are excessive; one is a lot more excessive than the other.

David. said...

San Francisco is getting desperate, as Trisha Thadani reports in San Francisco sues California over ‘unsafe,’ ‘disruptive’ self-driving cars:

"In the most aggressive attempt yet to reduce the number of self-driving vehicles in this city, San Francisco filed a lawsuit against a state commission that allowed Google and General Motors’ autonomous car companies to expand here this summer, despite causing a pattern of “serious problems” on the streets.

The lawsuit, which has not been previously reported and was filed in December, sends a strong message from the nation’s tech capital: autonomous vehicles are not welcome here until they are more vigorously regulated.

It’s yet another blow for the rapidly evolving self-driving car industry, which flocked to San Francisco hoping to find a prominent testing ground that would legitimize it around the United States. Instead, the two major companies — Google-owned Waymo and General Motors-Owned Cruise — have largely been cast aside by the city as an unwelcome nuisance and a public safety hazard."

Finally, someone is siding with the involuntary beta-testers.

David. said...

Simon Sharwood reports that Angry mob trashes and sets fire to Waymo self-driving car:

"An angry mob has destroyed a Waymo self-driving taxi in San Francisco.

"Waymo Vehicle surrounded and then graffiti'd [sic], windows were broken, and firework lit on fire inside the vehicle which ultimately caught the entire vehicle on fire," reads a Xeet from the San Francisco Fire Department. Nobody was in the car at the time.
...
It is not clear, however, why the mob decided to destroy the car, though it kicked off in the city's Chinatown district right where and when many folks were celebrating the Chinese New Year. With fireworks being set off all over the place, and excitement running high, the Waymo car appeared to be in the wrong place at the wrong time, and was set upon when it showed up."

I guess San Franciscans need more education about the wonders of autonomous vehicle technology.

David. said...

Sean O'Kane reports that Waymo recalls and updates robotaxi software after two cars crashed into the same towed truck:

"Waymo is voluntarily recalling the software that powers its robotaxi fleet after two vehicles crashed into the same towed pickup truck in Phoenix, Arizona, in December. It’s the company’s first recall.
...
The crashes that prompted the recall both happened on December 11. Peña wrote that one of Waymo’s vehicles came upon a backward-facing pickup truck being “improperly towed.” The truck was “persistently angled across a center turn lane and a traffic lane.” Peña said the robotaxi “incorrectly predicted the future motion of the towed vehicle” because of this mismatch between the orientation of the tow truck and the pickup, and made contact. The company told TechCrunch this caused minor damage to the front left bumper.

The tow truck did not stop, though, according to Peña, and just a few minutes later another Waymo robotaxi made contact with the same pickup truck being towed. The company told TechCrunch this caused minor damage to the front left bumper and a sensor. (The tow truck stopped after the second crash.)"

David. said...

Samuel Axon reports that After a decade of stops and starts, Apple kills its electric car project:

"After 10 years of development, multiple changes in direction and leadership, and a plethora of leaks, Apple has reportedly ended work on its electric car project. According to a report in Bloomberg, the company is shifting some of the staff to work on generative AI projects within the company and planning layoffs for some others.

Internally dubbed Project Titan, the long-in-development car would have ideally had a luxurious, limo-like interior, robust self-driving capabilities, and at least a $100,000 price tag. However, the ambition of the project was drawn down with time. For example, it was once planned to have Level 4 self-driving capabilities, but that was scaled back to Level 2+.

Delays had pushed the car (on which work initially began way back in 2014) to a target release date of 2028. Now it won't be released at all."