
When the Wall Street Journal reported on Monday evening that OpenAI had missed both its user-growth and revenue targets, the names that fell hardest weren’t its own — OpenAI’s shares don’t trade publicly yet — but those of its dependents. By Tuesday morning, Oracle was off more than 5%, CoreWeave fell 5.4%, Nvidia, Broadcom and AMD all dropped 3% to 4%, and SoftBank cratered around 10% in Tokyo. The stock most levered to OpenAI’s spending — Oracle, anchor of its US$300 billion five-year compute deal — has now spent 48 hours signaling something the AI industry has been quietly debating for months: that the demand curve may be flattening.
The Journal’s reporting was specific. CFO Sarah Friar has warned colleagues she’s not sure OpenAI can pay for future computing contracts at current revenue trajectories. The company missed an internal target of one billion weekly active ChatGPT users by year-end 2025, has fallen short of monthly revenue targets more than once in 2026, and is grappling with subscriber defections — ground ceded to Anthropic in coding and to Google’s Gemini in everyday consumer use. Sam Altman and Friar called the framing “ridiculous” in a joint statement to CNBC, insisting the team is “totally aligned on buying as much compute as we can.” The market disagreed.
The contrast on the other side of the trade is stark. Anthropic’s annualized revenue run-rate hit US$30 billion in April, comfortably ahead of OpenAI’s reported US$24–25 billion, and the company has used the past fortnight to lock in the compute it had been very publicly short of. Amazon expanded its commitment by up to US$25 billion (US$5 billion now, up to US$20 billion later) and reserved 5 gigawatts of capacity. Google followed days later with up to US$40 billion — US$10 billion immediately at a US$350 billion post-money valuation, with US$30 billion in milestone-based follow-ons — and another 5 gigawatts of TPU capacity. The IPO machinery is moving in parallel: Wilson Sonsini is advising Anthropic on the listing process, insiders refused to sell at February’s US$350 billion tender, and secondary offers have since priced the company at US$800 billion or more. Bloomberg puts the target IPO window at October.
The Pentagon Problem
How did we get here? The most popular answer in San Francisco is the Pentagon. In late February, Defense Secretary Pete Hegseth gave Anthropic chief executive Dario Amodei until 5:01pm Friday to drop product-level restrictions on autonomous weapons and domestic mass surveillance — or be designated a supply-chain risk. Amodei refused. The blowback was instantaneous: Anthropic was blacklisted from federal procurement, sued the Trump administration, won a partial injunction in California and lost on appeal in DC. But the consumer response inverted the political one. Within a week Claude had overtaken ChatGPT in the iOS App Store. ChatGPT uninstalls jumped 295% after OpenAI signed its replacement contract. Amodei, in an internal memo, called OpenAI’s deal “safety theater”; Altman, in an unusually candid AMA, conceded the “optics don’t look good.” What looked like a commercial death sentence in February has, eight weeks later, played out closer to a brand event — and a procurement gift to Anthropic’s enterprise sales team.
That brand event is now colliding with capacity. Anthropic has spent April managing what its own power users have started calling “shrinkflation”. Opus 4.7, the company’s flagship released in mid-April, ships with a new tokenizer that produces up to 35% more tokens for the same input, defaults to a noticeably more verbose style, and is widely accused on GitHub and Reddit of having been quietly throttled within a week of launch. Anthropic published a candid postmortem on April 23 acknowledging three product changes that hurt Claude Code quality — a default reasoning level dropped from “high” to “medium,” a session-caching bug, and a 25-word response cap between tool calls — and reset usage limits for Claude Code subscribers. A Rate Limits API followed two days later. Most provocatively for paying users, Anthropic is currently A/B testing removing Claude Code from new US$20 Pro signups (around 2% of new “prosumer” signups). Head of growth Amol Avasare framed it as evidence that “our current plans weren’t built for this.” The subsidization era is encountering its first hard ceiling.
Meanwhile OpenAI’s product cadence keeps accelerating without obviously moving the needle. GPT-5.5, released April 23, took back the top of the Artificial Analysis Intelligence Index, three points clear of Opus 4.7, and posted a state-of-the-art 82.7% on Terminal-Bench 2.0 — but its API price doubled, its hallucination rate hit 86% on AA-Omniscience versus 36% for Opus 4.7, and SWE-Bench Pro still belongs to Anthropic. ChatGPT Images 2.0 launched two days earlier with genuine fanfare for its dense-text rendering and “thinking” capabilities, but has produced nothing resembling the user-acquisition spike of the Ghibli moment in early 2025. And Sora was discontinued on April 26 in a strategic redirect of compute toward coding and enterprise. The pattern is plainly defensive: OpenAI is shipping faster than at any point in its history, and the WSJ numbers say users aren’t compounding the way the company’s spending plan needs them to.
Which leaves the IPOs. Both companies are racing the same window — late 2026, possibly slipping into early 2027. Anthropic now has the harder operational job: scaling into roughly US$165 billion of pledged hyperscaler capital and capacity while convincing public-market investors that a US$30 billion run-rate justifies an US$800 billion-plus valuation. OpenAI has the inverse problem: defending an US$852 billion private mark while its supplier stocks bleed 5% on a single news story. Jordan Klein, TMT sector specialist at Mizuho, captured the timing tension neatly in a Tuesday note: “How new could update be as the round closed end March when the quarter would have ended. And it’s not even May 1. I highly doubt OAI fundamentals slowed that fast in under 30 days.” The implication is that whatever’s happening was already priced into the US$122 billion round — and may not be priced into the IPO.

The AI Trade Wobbles
For crypto investors, the second-order story is the spillover. The same hyperscalers underwriting both AI labs are the ones repurposing Bitcoin’s energy and data-center stack for compute, and the ones generating the Nvidia and Oracle demand that bled out on Tuesday. Naval Ravikant’s USVC fund is already trying to wrap retail access around private AI exposure precisely because the IPOs may not be the entry point investors are hoping for. The Mythos disclosure earlier this month has put cybersecurity squarely into the AI capex narrative. Bitcoin opened at US$77,368 on Tuesday before drifting lower into the third Federal Reserve meeting of the year, and the AI-trade beta has rarely felt this unstable.
Google I/O is in a month. Gemini 3.1 Pro Preview is already tied with Opus 4.7 on the Intelligence Index, and Google’s TPU stack is now financing one of Anthropic’s two five-gigawatt commitments. If Sundar Pichai uses the keynote to announce a model that beats both Anthropic and OpenAI on the same week the bankers are pricing the books, the AI power map redraws again — before either of the would-be trillion-dollar listings can print.
Whether all of this is the AI bubble popping is the wrong question, and also the only one that matters. The dot-com comparison is easy and partly wrong: AI labs have real revenue (Anthropic at a US$30 billion run-rate, OpenAI in the mid-twenties), real enterprise penetration, and demand strong enough that the most valuable private companies in the world are rationing access to their flagship products. None of that was true of Pets.com. But the comparison is partly right. The hyperscaler-to-lab-to-hyperscaler money loop — Nvidia invests in OpenAI, OpenAI commits to Oracle, Oracle buys Nvidia chips, repeat — is the kind of circular financing that gets named after the fact. SoftBank dropping 10% on a single news story is what happens when an entire equity story rests on one customer hitting a growth curve. The cycle is still expanding but its margin of safety is gone. The next leg will either justify the multiple — some lab ships a model that obsoletes the field — or compress it, through a series of polite quarterly misses, until AI becomes a normal sector trading on normal multiples. For crypto, a slow re-rate redirects risk-on capital without breaking the broader market; a sudden one takes Bitcoin with it.
