From Nvidia to Oracle to AMD and CoreWeave, a whirlwind of intertwined contracts is inflating expectations—and inviting bubble talk.


A trillion-dollar tapestry of commitments

SAN FRANCISCO — In a year defined by superlatives, OpenAI has stitched together a latticework of chip and cloud commitments that analysts now tally at well over one trillion dollars in announced and proposed value. The scale is unprecedented: a $300 billion cloud agreement reportedly struck with Oracle; a letter of intent with Nvidia to deploy at least 10 gigawatts of next‑generation systems with up to $100 billion in staged investment; and, this week, a multibillion‑dollar deal with AMD for six gigawatts of compute and an option for OpenAI to buy as much as 10% of the chipmaker via warrants. Layer on $22.4 billion of capacity contracts with CoreWeave and a scatter of networking, power and real‑estate pacts, and it’s no exaggeration to say the AI era’s biggest wager now rides on OpenAI’s ability to turn compute into cash.

The raw physics is dizzying. Ten gigawatts of datacenter power—what OpenAI and Nvidia say they will jointly pursue—roughly matches the consumption of a mid‑sized nation. The AMD arrangement alone calls for six gigawatts over several years, with a first one‑gigawatt tranche expected to come online in the second half of 2026. Depending on the mix of GPUs, memory and networking, back‑of‑the‑envelope math places the hardware bill for each gigawatt in the tens of billions of dollars, before electricity contracts, cooling, grid interconnects and land. The logistics—substations, transformers, water rights, fiber backhaul—are as formidable as the silicon.
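That back-of-the-envelope math can be made concrete. The sketch below uses purely illustrative assumptions (the per-GPU power draw, per-GPU price, and non-GPU uplift are placeholders, not figures from any of the announced deals) to show why one gigawatt of capacity lands in the tens of billions of dollars:

```python
# Illustrative back-of-the-envelope: hardware cost per gigawatt of AI datacenter.
# All three inputs are assumptions chosen for round numbers, not reported figures.

WATTS_PER_GPU = 1_200        # assumed all-in draw per accelerator, incl. cooling share
COST_PER_GPU = 40_000        # assumed price per GPU in USD
NON_GPU_MULTIPLIER = 1.5     # assumed uplift for memory, networking, racks

def hardware_cost_per_gigawatt() -> float:
    """Return a rough hardware bill, in USD, for one gigawatt of capacity."""
    gpus = 1_000_000_000 / WATTS_PER_GPU      # how many GPUs one gigawatt can power
    return gpus * COST_PER_GPU * NON_GPU_MULTIPLIER

print(f"~${hardware_cost_per_gigawatt() / 1e9:.0f}B per gigawatt")  # prints ~$50B per gigawatt
```

Under these placeholder inputs, a single gigawatt implies roughly $50 billion of hardware before power, land, and construction, which is why a six- or ten-gigawatt commitment quickly compounds into the trillion-dollar tallies analysts cite.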

Circular finance—and why it worries critics

For believers, the plan is simply the logical next step in the scaling laws of AI: more compute unlocks larger, more capable models, which in turn catalyze new demand. What’s different, and controversial, is the financing architecture knitting the ecosystem together. Nvidia’s pledge to invest up to $100 billion in OpenAI is designed to help the startup acquire Nvidia systems; Oracle, a prime OpenAI cloud partner, is racing to build and finance massive GPU regions; CoreWeave, another supplier, has its own multibillion‑dollar orders with Nvidia—including a provision that Nvidia will purchase some unused capacity. AMD’s deal layers on warrants that could award OpenAI a chunk of the chipmaker’s equity if usage thresholds are met. The result is an ouroboros of commitments—each link in the loop helping underwrite the next.

Critics call it “circular finance”: supplier investments, customer prepayments, and capacity guarantees that, in effect, recycle expectations into balance sheets. As long as growth accelerates, the loop can look elegant—capital costs fall, partners share risk, and new datacenters rise from scrubland. But if demand disappoints, this scaffolding could magnify stress. A shortfall could leave clouds with idle racks, manufacturers with inventory, and lenders with paper dependent on consumption curves that never materialize. The feedback loop that turbocharges upside could just as quickly compress margins on the way down.

Inside OpenAI, the wager is straightforward: enterprise revenue and consumer subscriptions will catch up to, then outpace, compute costs as models deliver another step change in usefulness. Executives point to surging interest from banks, pharma, and software vendors who want production‑grade copilots, code assistants and video generation. They argue that waiting for demand to prove itself, as was common in prior cloud cycles, would strand the industry in a compute shortage just as the breakthrough applications appear. “Build it now” is the mandate, and partners have obliged. AMD’s stock ripped higher on news of the deal; Nvidia is already the world’s most valuable semiconductor company; Oracle has framed the OpenAI pact as a cornerstone of its cloud renaissance.

The economics lag the demos

Yet even fans concede the path is narrow. AI’s most eye‑catching demos don’t always translate into steady gross margin. Inference costs can scale faster than revenue when models are large and latency targets are tight. For enterprises, governance, accuracy and vendor lock‑in remain sticking points, slowing rollouts beyond proofs of concept. On the consumer side, the market for premium AI assistants is still price sensitive. If a deflationary cycle in inference spending doesn’t arrive—via cheaper, more efficient chips and software—unit economics could lag capacity for longer than investors expect.

The capital stack adds another layer of fragility. Several deals feature prepayments, credits, or performance‑based equity. These can be elegant tools to align incentives, but they also borrow from tomorrow’s demand. If the cadence of model upgrades slows, or regulatory hurdles stretch deployment timelines, projected usage can slip out a quarter or two. In traditional project finance, such slippage is tolerable. In the AI arms race, a delay can shove revenue recognition from one fiscal year to the next, rippling across counterparties whose own guidance assumes OpenAI‑driven consumption.

The chokepoints you can’t demo: power and permits

Power and permitting are the unglamorous governors of ambition. Moving from announcements to electrons requires utility‑scale interconnects, often in regions where grids are already strained. Even when megawatt‑hour contracts are secured, datacenter builds must clear environmental reviews and local opposition. Partnering with utilities for on‑site generation—gas, renewables, or nuclear‑adjacent technologies—can mitigate risks, but increases complexity and upfront cost. Any delay in one node of the chain cascades through the rest: chips idle without racks; racks idle without power; and software teams wait on capacity that exists only on slides.

Supplier chess and optionality

Meanwhile, the supplier chessboard is shifting. Nvidia’s Blackwell generation is absorbing the lion’s share of near‑term demand, but AMD’s MI450 roadmap represents the most credible alternative in years. If AMD hits its performance‑per‑watt and software‑stack milestones, OpenAI gains leverage—and optionality—to arbitrate price and supply between silicon giants. That optionality is embedded in this week’s warrant grant, potentially giving OpenAI upside if AMD’s strategy works. Conversely, if AMD slips, OpenAI’s diversification bet could dilute focus while leaving it more exposed to Nvidia lead times.

The most acute criticism concerns market structure. If one startup’s appetite coordinates the economics of the world’s most valuable chipmaker, a top‑three cloud, and a fast‑rising GPU host, then AI’s capex cycle is more concentrated than the social‑media or smartphone booms that preceded it. That raises standard questions for regulators—about exclusivity, preferential access, and whether circular commitments impede rivals. Even short of formal scrutiny, customers will want assurances that OpenAI’s partners won’t crowd out neutral capacity or price competitors off the field.

What would calm the market

What would ease bubble concerns? First, clearer disclosures. Many headline numbers blend letters of intent, framework agreements and firm purchase orders; distinguishing among them would let investors model risk with far less guesswork. Second, more pay‑as‑you‑go options for enterprise buyers, with transparent unit economics and migration paths that prevent lock‑in. Third, credible efficiency roadmaps—from sparsity to compilation to custom accelerators—that bend the cost curve down even if model sizes continue to expand.

None of this negates the possibility that the trillion‑dollar bet works. If generalized AI systems unlock durable productivity gains—automating swaths of white‑collar work, compressing R&D cycles, transforming education and media—the present capital intensity will look, in hindsight, like a rational sprint to a higher plateau. The history of technology offers precedents for both outcomes: fiber gluts that took a decade to amortize, and cloud investments that, once consumption arrived, produced cash machines.

The bet, in plain terms

OpenAI’s calculus is to aim where the puck is going, not where it is. The company’s leaders are betting that an early surplus of compute, even if painful to finance, is the only way to birth the next generation of models and the applications that justify them. Whether the market’s new AI math adds up will hinge less on press‑release arithmetic and more on the mundane: uptime, latency, developer tooling, and contracts that convert experiments into production. Until then, the circularity at the heart of 2025’s megadeals will remain both the engine and the anxiety of the boom.

Discover more from The Tower Post
