The key message of the graph is the contrast between the 5-year straight-line depreciation and the curves showing the value of the remaining Bitcoin that the rig will generate. I suggested that the same mismatch between straight-line depreciation and remaining value generation would apply to AI hardware. I don't claim to be the first to flag this issue; The Economist's The $4trn accounting puzzle at the heart of the AI cloud was about a month earlier.
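The mismatch is easy to see with a toy model. All numbers below are illustrative assumptions (a $100k rig, income that halves each year as newer hardware ships), not real mining economics, but the shape is the point: straight-line book value overstates remaining earning power in every year after the first.

```python
# Toy comparison of straight-line depreciation vs. remaining earning power.
# All figures are illustrative assumptions, not real rig economics.
cost = 100_000          # purchase price
years = 5               # straight-line depreciation schedule
annual_income = 60_000  # first-year income the rig generates
decay = 0.5             # income halves each year as newer hardware ships

# Book value under straight-line depreciation at the start of each year.
book_value = [cost * (1 - y / years) for y in range(years + 1)]

# Remaining value: undiscounted sum of the income the rig will still earn.
remaining_value = [
    sum(annual_income * decay ** t for t in range(y, years))
    for y in range(years + 1)
]

for y in range(years + 1):
    print(f"year {y}: book ${book_value[y]:>9,.0f}  remaining ${remaining_value[y]:>9,.0f}")
```

On these assumptions the book value exceeds the remaining income from year one onward, which is exactly the gap between the straight line and the curves in the graph.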
About a month later I returned to AI economics with Mind The GAAP, but that was mostly focused on other parts of the puzzle. But now, thanks to Bryce Elder's Big tech’s $680bn buy-now-book-later problem, it turns out that both Michael Burry of The Big Short and Morgan Stanley Research agree with me that there's a problem:
Below the fold I go into the details, with many thanks to Bryce Elder.
would unveil a fresh AI chip every year rather than every couple of years. In March its boss, Jensen Huang, remarked that “when Blackwell starts shipping in volume, you couldn’t give Hoppers away,” referring to Nvidia’s latest chips and their predecessors, respectively.

Second, Camry depreciation is a curve, not a straight line, and a Camry retains a small proportion of its initial value for a long time. This is similar to what happens to mining rigs, which are often deployed in shipping containers so that, as they become uneconomic in areas of relatively high power costs, they can be shipped to areas with cheaper power to extend their life. We haven't seen this yet with Nvidia racks, which have much more demanding environmental requirements.
Right now older hardware can still make money, because the AI frenzy keeps AI hardware supply-limited and people can't get the newer hardware that would make much more money. Michael Burry makes this point:
The idea of a useful life for depreciation being longer because chips from more than 3-4 years ago are fully booked confuses physical utilization with value creation. Just because something is used does not mean it is profitable. GAAP refers to economic benefits.

But projecting this value for older hardware into the future involves assuming the supply constraint will continue indefinitely. Once the supply constraints lessen, the value they create for older hardware will vanish.
Airlines keep old planes around for overflow during Thanksgiving or Christmas, but are only marginally profitable on the planes all the same, and not worth much at all.
Internet companies are in denial about getting fat. The advertising silos and data miners of 10 years ago are now infrastructure-heavy and capital-intensive, but their reporting has yet to adjust to the rapid weight gain.
This mismatch may become harder to ignore as the value of their spending is written off. Morgan Stanley forecasts that, collectively, Microsoft, Oracle, Meta Platforms and Alphabet could book more than $680bn in depreciation charges over the next four years.
Hyperscalers have since 2020 been lengthening the assumed useful lives for their servers and network equipment, with only Amazon in 2025 going the other way. The ... chart simplifies things but shows the general direction of travel:
Understating depreciation by extending useful life of assets artificially boosts earnings - one of the more common frauds of the modern era.

The boost to earnings can be significant:
Though depreciation is a non-cash cost, the money having already been spent, any change to useful life assumptions has a big effect on GAAP income. For example, Google owner Alphabet raised earnings guidance in 2023 by $3bn by increasing the longevity of data centre equipment by a year or two.

Depreciation is applied through time. What matters isn't just the rate, but also when it starts:
The value of a data centre under construction is held on a company’s balance sheet, but depreciation is only being applied after it becomes operational. Data centres take years to build, so the delay between a capital outlay and a net income deduction can be very long.
Also: capex adds to a company's book value, showing up immediately in the cash flow statement as investments in property, plant and equipment, without a parallel increase in depreciation expense on the income statement. Comparing the former with the latter will exaggerate average asset longevity.
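The useful-life lever Elder describes can be sketched with illustrative numbers. The asset base below is a hypothetical assumption, not Alphabet's actual fleet; the mechanism, not the figures, is the point.

```python
# Sketch of the useful-life lever. The asset base is a hypothetical
# assumption, not any company's actual figures.
asset_base = 60e9  # server fleet, at cost

def annual_straight_line_depreciation(cost: float, useful_life_years: float) -> float:
    """Equal depreciation charge for each year of assumed useful life."""
    return cost / useful_life_years

dep_4yr = annual_straight_line_depreciation(asset_base, 4)  # $15bn/year
dep_6yr = annual_straight_line_depreciation(asset_base, 6)  # $10bn/year

# Stretching the assumption from 4 to 6 years cuts the annual charge,
# and the difference drops straight into GAAP operating income.
earnings_boost = dep_4yr - dep_6yr  # $5bn/year on these numbers
print(f"annual earnings boost: ${earnings_boost / 1e9:.0f}bn")
```

Nothing about the hardware changes; only the assumption does, yet reported income rises by billions a year.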
Elder is using estimates compiled by Morgan Stanley’s Accounting & Tax desk:
The team splits hyperscaler capex into AI and non-AI, smooths out construction-in-progress costs, and allocates the expected lease expenses to property. The team assumes GPUs have a useful life of up to six years and that warehouses will last for 15 years.

These assumptions are very generous to the hyperscalers:
- Even if we discount Nvidia's plan to switch to a one-year cadence, they assume GPUs more than two generations obsolete can generate enough income to pay for their power, cooling and space.
- The idea that the physical data center will accommodate 7 generations of racks is laughable in light of the fact that the data centers currently being built are unlikely to accommodate the next major generation of Nvidia racks. How to design a data center that will is currently a research problem.
Based on Morgan Stanley’s calculations, Alphabet’s depreciation expense could quadruple by its 2028 year-end. Oracle’s 2025 depreciation charge of $4bn might balloon to $56bn by 2029, which would be equivalent to 28 per cent of revenue expected by the consensus.

It also looms large as a percentage of total expenses by 2029, with ORCL at about 42%, META around 35%, MSFT around 32% and GOOG around 15%. Elder writes:
Those sorts of numbers challenge The Street’s assumption that, with the exception of Oracle, hyperscaler operating margins will improve over the next four years.
To gauge the potential hit to profitability, Morgan Stanley compares consensus revenue forecasts against operating expenses excluding depreciation. Its figures suggest that, to deliver what’s expected, hyperscaler costs ex depreciation need to collapse:
Elder adds to the skepticism:
So far, hyperscaler costs have been doing the opposite of collapsing. Meta said overnight that its total expenses would be up to 44 per cent higher this year at between $162bn and $169bn, and that 2026 capex would be higher by up to 94 per cent at between $115bn and $135bn. Microsoft also raised capex materially, saying it intends to “roughly double our total datacenter footprint over the next two years”.

So much for the hyperscalers' accounting problems. The pure-play AI companies have it even worse.
The biggest of the pure-play AI companies is OpenAI. How much revenue how soon do they need? Gareth Gore reports on that in OpenAI faces financial crunch point as huge supplier bills start to come due:
More than US$80bn of deferred commitments are set to come due this year, according to bank projections – including some linked to a deal last year to purchase US$250bn of compute from Microsoft.

With a British level of understatement, Gore writes:
With other contracts that OpenAI has taken out with data centres, cloud computing providers and chip manufacturers over the past few years also starting to come due, the company is facing a wall of payment demands that could amount to several hundred billion dollars between now and the end of 2030.
How a company with just US$20bn in revenues pays for that is the big question. Larger rivals like Alphabet and Meta have legacy businesses generating hundreds of billions of dollars a year that they can draw on. OpenAI, by contrast, can only survive for as long as its backers are willing to keep it afloat.
“These are very important questions right now,” said Gil Luria, head of technology research at West Coast boutique investment bank DA Davidson. “If OpenAI can’t raise the capital it needs, that will cement the fact that the only big winners from AI will be the largest mega caps.”
The $80B coming due this year would consume the $41B of the last capital raise plus twice last year's revenue. How is the capital raising going?
“OpenAI was only able to raise capital primarily from one investor – SoftBank – during its last fundraise, which is one signal that there are not a lot of investors that are willing to participate at this size,” said Luria, who added that hopes of Gulf money are “a supposition at this point”.

In OpenAI’s Insane Scaling Problem, Lockett applies some skepticism to OpenAI's claim that "its Annualised Recurring Revenue (ARR) for 2025 was $20 billion":
OpenAI has reported that ChatGPT had 800 million weekly users by the end of 2025. Multiple third-party sources have found that only 5% of users pay for ChatGPT, so we know that roughly 40 million people do. If they all pay the $20 per month subscription, that equates to $9.6 billion a year in revenue. But we also know that roughly 30% of OpenAI’s revenue comes from other sources, like licensing. So we can infer that the current annualised revenue should be around $13.7 billion. However, ChatGPT started 2025 with far fewer weekly users, which means that its total revenue for 2025 should be substantially below $13.7 billion.

Lockett's suspicion is that it came from Microsoft:
This is why I personally find that $20 billion figure hard to believe. It looks like $10 billion or more has magically dropped into OpenAI’s wallet at some point in the last quarter of 2025. Where did that come from?
Integrating Copilot into essential subscriptions and services achieves two things. Firstly, it forces this terrible technology onto us, the public, which makes their numbers look better. And secondly, it means that Microsoft can technically claim that AI drives all the revenue generated from these subscriptions and services. As such, they could send 20% of it to OpenAI, even though their models didn’t directly create this revenue.

Microsoft is reportedly starting to realize that forcing "this terrible technology onto us" is causing their customers to consider fleeing from Windows 11 to Linux. So why would they ship billions to OpenAI?
I suspect this is where that $10 billion-plus figure appeared from. OpenAI desperately needed cash, and Microsoft used this method to send it over covertly.
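Lockett's revenue arithmetic, quoted above, can be reproduced directly from the figures he cites:

```python
# Reproducing Lockett's back-of-envelope estimate from the figures quoted above.
weekly_users = 800_000_000   # ChatGPT weekly users at the end of 2025
paying_share = 0.05          # ~5% of users pay, per third-party estimates
monthly_price = 20           # $20/month subscription

subscribers = weekly_users * paying_share                 # ~40 million
subscription_revenue = subscribers * monthly_price * 12   # $9.6bn/year

other_share = 0.30           # ~30% of revenue from licensing and the like
implied_arr = subscription_revenue / (1 - other_share)    # ~$13.7bn

print(f"paying subscribers: {subscribers / 1e6:.0f}m")
print(f"implied annualised revenue: ${implied_arr / 1e9:.1f}bn")
```

The gap between this ~$13.7B ceiling and the claimed $20B is the $10 billion-plus Lockett can't account for.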
Lockett's answer is that Microsoft and OpenAI are in a partnership and OpenAI is desperate for cash:
Multiple analyses have found that OpenAI’s operational costs will be significantly more than $28 billion in 2025. So, even with $20 billion in revenue, they are still likely miles away from breaking even, let alone creating sustainable profit.

Lockett posits that the reason OpenAI desperately needs cash is that they lack economies of scale:
...
Therefore, in the best-case scenario, where this $20 billion figure is honest revenue and OpenAI’s operational costs were as predicted, they would make an $8 billion loss in 2025. That is noticeably larger than the $5 billion loss they posted in 2024. But in the worst-case scenario, where their actual revenue is closer to $10 billion and their compute costs are accurately disclosed, they might face losses in the multiple tens of billions of dollars.
...
Regarding that $5 billion loss in 2024, it was actually large enough to threaten OpenAI with bankruptcy. In fact, the only reason OpenAI survived to see 2025 was due to a $6 billion corporate bailout from its backers, mainly Microsoft. Microsoft had sunk tens of billions of dollars into OpenAI and had already begun basing much of its new direction on its partnership with OpenAI. In other words, if OpenAI went under, it would be disastrous for Microsoft. Bailing them out was likely the cheaper option, even if it damaged OpenAI’s reputation.
But LLM AIs do not follow this trend. Scale does not reduce the unit cost. Furthermore, the cost of developing these AIs is increasing exponentially as they attempt to make them more capable (read more here). So, trying to give AI a larger scale (i.e., making it useful in more areas) can dramatically increase the unit cost.

Les Barclays' Who Captures the Value When AI Inference Becomes Cheap? is a long, detailed discussion of the economics of AI. He notes that:
In other words, the larger and better you try to make AI, the further away from profitability it becomes, given that costs scale up faster than revenue.
OpenAI is proving this rather beautifully. Even if they have genuinely more than doubled their annual income from 2024, which I highly doubt, their annual loss has grown by at least 33%. They are going backwards, even further into the red. You can only do that for so long before the lights are turned off.
So this means the cost per query may be increasing for frontier models even as the cost per token declines, which then creates a dynamic whereby improving technology makes AI economics more challenging, defying traditional business logic. AI companies are now at the precipice of a paradox – they either maintain current while increasing inference costs to deliver better results or hold inference costs constant and risk dropping out of the race against competitors who invest more computation per query. Because of competitive dynamics pushing towards the former, it puts pressure on already challenging unit economics.

If Lockett is right that $10B of OpenAI's revenue was effectively a subsidy from Microsoft, then their operations cost $28B and generated $7B in subscription revenue plus $3B in stuff like licensing. That means that OpenAI was charging subscribers around 25% of what they were costing. To make ends meet, subscription costs would need to be 4 times higher, or $80/month. The number of their users who would feel this worth paying is much less than 40 million.
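The subsidy-scenario arithmetic works out like this, using only the figures from the text above:

```python
# Back-of-envelope on the Microsoft-subsidy scenario; all figures come
# from the discussion above, not from OpenAI's disclosures.
operating_cost = 28e9        # estimated 2025 operating costs
subscription_revenue = 7e9   # $20bn claimed, minus the $10bn suspected
                             # subsidy and ~$3bn of licensing-type revenue
other_revenue = 3e9

coverage = subscription_revenue / operating_cost  # subscribers cover ~25%
multiplier = 1 / coverage                         # prices must rise ~4x
breakeven_price = 20 * multiplier                 # ~$80/month

print(f"coverage: {coverage:.0%}, break-even price: ${breakeven_price:.0f}/month")
```

At $80/month the subscriber base would shrink well below the 40 million who pay $20 today, so the shortfall cannot simply be priced away.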