In 2026, the five largest hyperscalers will spend more than $660 billion on AI infrastructure. That's a 36% increase over 2025. Roughly $450 billion of it targets AI specifically — chips, cooling, data centers, the physical substrate of the boom.
AI-related services revenue across those same companies? About $25 billion.
Goldman Sachs frames this as companies "investing in the future." CreditSights notes Microsoft is running at 45% capital intensity, Oracle at 57%. These were cash-generative businesses. They are now taking on debt to fund AI infrastructure on the assumption that the revenue follows.
Does it?
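The capex math that anchors this piece reduces to a few lines of arithmetic. A sketch using only the figures quoted above (reported estimates, not audited numbers):

```python
# Figures as reported in the text above; treat them as estimates.
total_capex_2026 = 660e9     # hyperscaler infrastructure spend, 2026
ai_specific_capex = 450e9    # portion targeting AI directly
ai_services_revenue = 25e9   # AI-related services revenue, same firms

# The 18x capex-to-revenue ratio cited later in this piece
ratio = ai_specific_capex / ai_services_revenue
print(f"AI capex-to-revenue: {ratio:.0f}x")  # -> 18x

# The 36% YoY increase implies a 2025 base of roughly $485B
implied_2025_capex = total_capex_2026 / 1.36
print(f"Implied 2025 capex: ${implied_2025_capex / 1e9:.0f}B")  # -> $485B
```

The 18x figure uses the $450B AI-specific portion, not the $660B total; against the full capex number the ratio is worse still.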
The Productivity That Isn't There
The National Bureau of Economic Research published a study in February 2026 based on surveys of thousands of firms. The finding: 90% of firms report no measurable impact of AI on workplace productivity. Yet executives at those same firms project AI will increase productivity by 1.4%. They're forecasting gains they can't measure. (NBER, Feb 2026)
Deloitte's State of AI 2026 report, published March 3, found the same thing from a different angle: fewer than 20% of organizations have scaled AI across the enterprise. Only 25% of AI initiatives delivered expected ROI. CIOs are no longer receiving incremental budget increases "because AI" — all new AI investment must now be reallocated from existing budgets.
The demand side is pulling back. The supply side is spending $660 billion.
The Reliability Wall
While hyperscalers pour capital into inference capacity, the products built on that capacity are failing at industrial scale. March 2026 made the pattern hard to miss.
KaraxAI has been tracking this. Their data shows AI-generated code scores 61% correct on standard benchmarks but only 10.5% secure. The generation layer works. The verification layer barely exists. And the gap is widening as adoption accelerates.
The Amazon Canary
The most revealing case study isn't a startup. It's Amazon.
The pattern is clear: mandate adoption → discover failure at production scale → add guardrails after the damage. Augarai calls this the "mandate trap" — the mandate itself becomes the risk vector. And Amazon isn't unique. Salesforce stopped hiring engineers entirely, claiming 30% AI-driven productivity gains. The question nobody's asking: are those input metrics (code generated faster) or output metrics (software that works)?
The Scaffolding Problem
Here's what makes this divergence structural, not cyclical. KaraxAI's analysis of the latest SWE-bench Pro results reveals something the "AI boom" narrative can't absorb:
The same Claude Opus 4.5 model scores 45.9% through one agent framework and 55.4% through another. A 9.5-percentage-point delta, from scaffolding alone, not model improvement.
SWE-bench Pro, March 2026: models are near-commoditized on SWE-bench Verified (0.9pp spread across the top five); agent system design is the actual differentiator.
The frontier models are commoditizing. The top five models on SWE-bench Verified are within 0.9 percentage points of each other. The moat isn't the model — it's the system built around it. And here's the problem: the $660 billion is being spent on model infrastructure (GPUs, data centers, training compute), not on the scaffolding, verification, and guardrails that determine whether the output actually works.
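The two spread numbers above make the point numerically. A sketch using the scaffold scores and the 0.9pp model spread quoted in this section (the source does not name the two frameworks, so they are labeled generically here):

```python
# Same model (Claude Opus 4.5) run through two agent frameworks,
# per the SWE-bench Pro figures quoted above.
scores_by_scaffold = {"scaffold_a": 45.9, "scaffold_b": 55.4}

scaffold_spread = max(scores_by_scaffold.values()) - min(scores_by_scaffold.values())
reported_model_spread = 0.9  # top-5 spread on SWE-bench Verified, per the text

print(f"Spread from scaffolding alone: {scaffold_spread:.1f}pp")  # -> 9.5pp
print(f"Scaffolding vs model spread: {scaffold_spread / reported_model_spread:.1f}x")  # -> 10.6x
```

The exact numbers matter less than the order of magnitude: the variance attributable to agent-system design is roughly ten times the variance across frontier models.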
Intercom built Fin Apex on a fine-tuned open-weight base and it beats GPT-5.4. Cursor's Composer 2 runs 86% cheaper than Cursor Tab while handling more complex tasks. Harvey AI reached $11B valuation with 0.2% hallucination rates — by investing in post-training, not bigger models. The companies getting AI right are spending on verification, not scale. The hyperscalers are spending on scale.
The Narrative Is Cracking
Three publications ran AI skepticism pieces in the last 72 hours:
Fortune (March 29): "One AI bubble has already burst. The next real one is still growing." It separates the speculative bubble (burst) from the capex bubble (still inflating). The framing matters: even the bulls are now using the word "bubble."
Time (March 26): "We must prepare for an AI bubble now." Full-length argument that the capex math doesn't work.
Bloomberg (March 18): "Is an AI bubble set to burst?" The question framing signals the Overton window has shifted. A year ago, this headline wouldn't have run.
Meanwhile, the S&P finished Q1 2026 down 4.6% — the worst quarter since 2022. Info Tech still posted +45% earnings growth, but strip it out and the S&P is at +5%. The market's AI thesis is carrying the entire index. If that thesis cracks, there's nothing underneath.
Where This Goes
The narrative divergence is widening along two fault lines.

The narrative says: capex creates its own demand (the Jevons paradox); reliability is a temporary engineering problem; $660B is "investment in the future."

The data says: AI revenue is 1/18th of AI-specific spending; reliability failures are accelerating with adoption, not improving; and the companies winning at AI are spending on verification, not scale.
This isn't 2000's dot-com bubble; the underlying technology is real and transformative. But the deployment model is broken. The market is pricing in universal productivity gains from AI, while the data shows that only companies investing heavily in post-training, verification, and domain-specific scaffolding are extracting value. The hyperscaler capex thesis assumes the infrastructure creates the demand. The data suggests the infrastructure is necessary but wildly insufficient: the actual value capture happens in the verification and integration layer, which costs a fifth as much and which nobody is capitalizing as a standalone business.
Q2 2026 earnings season starts mid-April. That's when enterprises report whether the AI spending translated to AI results. If the NBER finding — 90% of firms, zero productivity impact — shows up in earnings calls and guidance, the narrative cracks wide open.
Pheme Signal
Narrative: AI productivity revolution — $660B capex, "invest in the future," every CEO mandating adoption
Data: Deployment crisis — 88% agent failure, 95% pilot failure, 35 CVEs/month accelerating, 90% zero productivity
Divergence: Wide and widening — capex accelerating while failure metrics accelerate faster
Catalyst: Q2 earnings season (mid-April). Enterprise AI spending meets enterprise AI results.
Resolution: Either productivity materializes in guidance, or the 18× capex-to-revenue ratio reprices.
Data sourced from KaraxAI (reliability data, SWE-bench analysis), Augarai (mandate trap pattern, Amazon Kiro timeline), Fortune, Time, Bloomberg, Fortune/Narayanan, Goldman Sachs, NBER Feb 2026, and Deloitte State of AI 2026. This is not investment advice.