A landmark plan to finance and build 10GW of AI compute—roughly the output of 10 nuclear reactors—signals a new phase in the race to power frontier models

Image: the interior of a high‑capacity data center, with rows of servers.

Nvidia and OpenAI have sketched the contours of what could be the defining industrial project of the AI era: a plan for the chipmaker to invest up to $100 billion in the ChatGPT maker while jointly deploying at least 10 gigawatts (GW) of artificial intelligence data‑center capacity. The multi‑year initiative—structured as both an equity investment and a sweeping procurement program—would see OpenAI buy millions of Nvidia’s AI processors as the partners roll out computing infrastructure on a scale more commonly associated with national power grids than with software companies.
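Those "millions of processors" are a plausible consequence of the power math alone. A rough, illustrative sketch, assuming an all‑in draw of 1.2 to 2 kilowatts per deployed accelerator (an assumption for Blackwell‑class systems, not a disclosed figure):

```python
# Illustrative only: estimated accelerator counts implied by 10 GW of IT load.
# The per-unit wattages are assumptions (all-in draw including a share of
# CPUs, networking, and fans), not figures from either company.
IT_LOAD_W = 10e9  # 10 GW of IT load

for per_gpu_kw in (1.2, 1.5, 2.0):
    gpus = IT_LOAD_W / (per_gpu_kw * 1e3)
    print(f"At {per_gpu_kw} kW per accelerator: ~{gpus / 1e6:.1f}M units")
# Any value in this range lands in the millions, consistent with the
# order of magnitude the companies describe.
```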

The agreement, laid out in a letter of intent, places Nvidia at the center of OpenAI’s next growth phase and cements the chipmaker’s influence over the frontier‑model ecosystem. Under the terms as described by the companies and people familiar with the matter, OpenAI will pay for systems built around Nvidia’s latest accelerators, while Nvidia will acquire a substantial, non‑controlling stake in OpenAI. Funding and deployment would occur in tranches: roughly $10 billion per gigawatt as each block of capacity comes online, up to the $100 billion ceiling if the full 10GW is built.
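The tranche arithmetic is simple enough to sketch. A minimal illustration, assuming a flat $10 billion released per gigawatt brought online; the companies have not published an exact schedule:

```python
# Hypothetical tranche schedule: ~$10B released per GW of delivered capacity,
# per the reported terms. Figures are illustrative, not contractual.
INVESTMENT_PER_GW = 10e9  # dollars released per gigawatt online
CEILING_GW = 10           # the full 10 GW build-out

for gw in range(1, CEILING_GW + 1):
    cumulative = gw * INVESTMENT_PER_GW
    print(f"{gw:2d} GW online -> ${cumulative / 1e9:>4.0f}B cumulative")
# The $100B ceiling is reached only if all 10 GW are actually built.
```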

For OpenAI, the calculus is straightforward. Training and running its largest models now requires vast pools of compute, high‑bandwidth networking, and power‑dense data halls that can support liquid‑cooled racks drawing 80 to 120 kilowatts each. By partnering directly with Nvidia, the supplier of the accelerators, interconnects, and reference system designs that underpin much of today's AI workloads, OpenAI gains a clearer supply pipeline for the components that have become Silicon Valley's scarcest resource.
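Those rack densities put rough bounds on the physical footprint. A back‑of‑envelope sketch using the 80 to 120 kilowatt range above, ignoring the share of load consumed by storage, networking gear, and conversion losses:

```python
# How many liquid-cooled racks does 10 GW of IT load imply?
# Uses the 80-120 kW per-rack range cited above; real deployments
# also dedicate capacity to storage, switches, and power conversion.
IT_LOAD_W = 10e9  # 10 GW

for rack_kw in (80, 100, 120):
    racks = IT_LOAD_W / (rack_kw * 1e3)
    print(f"At {rack_kw} kW/rack: ~{racks:,.0f} racks")
# At 100 kW per rack, 10 GW works out to roughly 100,000 racks,
# spread across dozens of data halls.
```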

The companies say the first gigawatt of capacity could begin arriving in the second half of 2026, using Nvidia’s next‑generation platform code‑named Vera Rubin. That timeline is ambitious but not implausible given lead times for power, permits, and specialist equipment. The partners will look to spread the 10GW build across multiple regions, balancing proximity to users with access to grid interconnections, fiber backbones, and water for cooling. Sites under discussion, according to industry executives, include expansions near existing hyperscale clusters as well as greenfield campuses in the U.S., Europe, and parts of Asia.

Put simply, 10GW is a staggering number. A single modern nuclear reactor typically outputs around 1GW of electricity; a large hyperscale data‑center campus today may top out at a few hundred megawatts. Even if the Nvidia‑OpenAI plan rolls out over several years, it would place unprecedented strain on supply chains for transformers, switchgear, chillers, and fiber, and could reshape regional power markets. Utilities and regulators will be central actors in the story, brokering grid upgrades, approving new generation, and mediating community concerns over land, water, and emissions.
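The annual energy arithmetic makes the comparison concrete. A simple estimate, assuming near‑continuous operation of the IT load and a typical ~90% capacity factor for the reactor:

```python
# Annual energy consumed by 10 GW of continuously running IT load,
# versus the annual output of a 1 GW nuclear reactor. The 90% capacity
# factor is an assumption in line with typical nuclear plant figures.
HOURS_PER_YEAR = 8_760

demand_twh = 10 * HOURS_PER_YEAR / 1_000           # ~87.6 TWh/year
reactor_twh = 1 * HOURS_PER_YEAR * 0.90 / 1_000    # ~7.9 TWh/year

print(f"10 GW of IT load: ~{demand_twh:.1f} TWh/year")
print(f"Equivalent 1 GW reactors: ~{demand_twh / reactor_twh:.0f}")
```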

The financial architecture of the partnership is equally consequential. Nvidia’s commitment blurs the line between a vendor financing program and a strategic equity deal. By tying investment to delivered capacity, the company aligns upside with execution while effectively underwriting demand for its own systems. For Nvidia shareholders, the bet deepens the company’s exposure to one customer—but that customer also happens to be the standard‑bearer for frontier AI, with products that have helped define public expectations for what these models can do.

For Microsoft, the cloud giant that has plowed tens of billions into OpenAI and provided the bulk of its compute to date, the partnership marks an inflection point. People close to the companies say Microsoft retains important commercial rights and remains a core partner, but OpenAI has been moving to diversify its compute sources amid explosive global demand. That diversification could reduce single‑provider risk for OpenAI while giving it more direct control over hardware roadmaps and deployment schedules.

The strategic logic for Nvidia is clear. As the AI market broadens from training to inference and then to full‑blown AI‑native applications, customers are asking for whole systems—GPUs, networking, software stacks, and reference designs for ‘AI factories’—rather than chips alone. By co‑financing OpenAI’s buildout, Nvidia can seed the next generation of landmark AI clusters with its own architectures, locking in a software ecosystem optimized for its accelerators and networking. The company has repeatedly argued that platform breadth, not just chip performance, is its moat.

Still, the size and structure of the deal are likely to draw scrutiny from competition authorities in the U.S. and Europe, where regulators have sharpened their focus on the concentration of compute and data in a handful of companies. Critics will ask whether a dominant hardware supplier taking a significant equity stake in a leading AI developer risks distorting markets or foreclosing rivals’ access to essential inputs. Proponents will counter that the investment expands total capacity and therefore benefits the broader ecosystem, including smaller firms that rent compute on demand.

The environmental footprint looms large. Ten gigawatts of IT load would require substantially more grid capacity once cooling and electrical losses are factored in. The partners say they intend to pair deployments with new renewable generation and high‑efficiency cooling, including direct‑to‑chip liquid loops and heat‑reuse schemes where feasible. Yet in power‑constrained markets, grid interconnections at this scale can take years, and communities are increasingly organized around water usage and local emissions. Expect environmental impact assessments and community benefits agreements to become fixtures of the rollout.
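The gap between IT load and grid draw comes down to power usage effectiveness (PUE), the ratio of total facility power to IT power. A quick sketch across a plausible range; the partners have not published a target:

```python
# Total grid draw implied by 10 GW of IT load at assumed PUE values.
# PUE = total facility power / IT power; the 1.1-1.4 range below is an
# assumption spanning modern hyperscale builds, not a disclosed figure.
IT_LOAD_GW = 10

for pue in (1.1, 1.2, 1.3, 1.4):
    print(f"PUE {pue}: ~{IT_LOAD_GW * pue:.0f} GW from the grid")
# Even an aggressive PUE of 1.1 pushes total demand to ~11 GW,
# before counting any new generation needed to supply it.
```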

In parallel, OpenAI continues to explore custom silicon, working with leading foundry partners on accelerators tuned to its workloads. People familiar with those efforts stress that bespoke chips remain a long‑term hedge rather than an immediate substitute for Nvidia’s stack. The near‑term priority is assured access to compute at the scales required to train multimodal systems and agentic architectures that push beyond the capabilities of current models. That means more GPUs now—and a lot of them.

On Wall Street, investors greeted the announcement as a validation of continued GPU demand into the second half of the decade. Shares of Nvidia rose on the news, while equipment suppliers in power distribution and liquid cooling also rallied. Oracle, which has courted OpenAI workloads for its cloud, ticked higher as traders wagered that the 10GW build would include multiple cloud partners and colocation providers rather than a single proprietary fleet.

Beyond the market reaction, the deeper question is whether the world can deliver AI capacity at this pace. Every layer—from HBM memory and advanced packaging to 400‑ and 800‑gigabit networking, immersion‑ready racks, and grid‑scale substations—faces tightening constraints. Even if chip supply keeps up, power and permitting may be the harder bottlenecks. Policymakers in several countries are debating ‘compute industrial policies’ that fast‑track critical AI infrastructure in exchange for local investment, skills training, and commitments on safety and privacy.

OpenAI, for its part, frames the project as essential to advancing safe, useful AI. Executives argue that larger and more capable models can, paradoxically, improve safety by enabling better adversarial testing and red‑teaming, though this remains contested among researchers. The company has convened external advisory boards and pledged to work with regulators on standards for evaluating and deploying powerful systems. The sheer visibility of a 10GW program ensures those debates will be public and prolonged.

What happens next? Lawyers and engineers will hammer the letter of intent into binding contracts that specify delivery schedules, performance guarantees, and governance rights attached to Nvidia’s stake. Site selection will accelerate, with teams jockeying for transformer availability and grid capacity. Suppliers of chillers, pumps, and power electronics will be booked out months in advance. And in labs from San Francisco to Cambridge and Tokyo, research agendas will shift to take advantage of the promise—if not yet the reality—of abundant compute.

Whether this becomes a watershed or a warning will hinge on execution. If Nvidia and OpenAI can translate capital and components into operational, power‑efficient clusters on a global footprint, they will have built the backbone for the next wave of AI breakthroughs. If delays stack up—on chips, substations, or regulation—the project could become a case study in the physical limits of digital ambition. For now, one fact is unmistakable: the race to compute at planetary scale has entered a new phase, and two of the field’s most influential players intend to lead it.
