Beijing’s sweeping ban goes beyond earlier H20 guidance as it bets on homegrown silicon to power its AI ambitions

Lede — China’s top internet regulator has instructed some of the country’s largest technology companies to halt purchases and testing of Nvidia’s artificial intelligence accelerators, an escalation that accelerates Beijing’s pivot toward domestic chips and deepens the global split in AI hardware supply chains. The move, delivered this week to firms including ByteDance and Alibaba, extends beyond earlier guidance focused on Nvidia’s H20 and now encompasses the RTX Pro 6000D—one of the few models the U.S. chipmaker could still market in China under export restrictions, according to people briefed on the matter.
Context — The decision marks the most forceful intervention yet by the Cyberspace Administration of China (CAC) into the country’s AI compute procurement, signaling confidence that homegrown processors are ready to shoulder a larger share of the workload. Officials have framed the shift as both an industrial policy imperative and a strategic response to tightening U.S. controls. Nvidia’s shares slipped after reports of the ban surfaced mid‑week, underscoring how central the Chinese market remains to the AI boom—even with sales already curtailed by Washington’s rules.
Scope — While previous guidance limited testing or deployment of the H20, the latest directive instructs major platforms to terminate evaluations and cancel orders for the RTX Pro 6000D as well, effectively closing China’s most prominent channel to Nvidia’s current‑generation, China‑specific accelerators. People familiar with the directives say the message from regulators is unambiguous: buy local when building or expanding AI compute.
Industry reaction — Executives and investors described a week of scrambling and contingency planning at leading internet groups. Some companies had begun small‑scale pilots with RTX Pro 6000D‑based clusters and were expecting incremental deliveries through the end of the year. Those pilots have now been frozen. “Nobody wants to be on the wrong side of a CAC directive,” said one senior engineer at a large platform firm. “If you don’t already own it and can plug it in today, assume it’s off the table.”
Domestic push — The ban arrives alongside a drumbeat of announcements from Chinese chipmakers and systems vendors touting new roadmaps and scale‑out systems. Huawei, the country’s most visible Nvidia rival, detailed a multi‑year update to its Ascend AI chips and Atlas compute platforms at a developer event in Shanghai last week. Beijing has also encouraged procurement alliances that bundle domestic accelerators, interconnects, and software stacks, seeking to reduce reliance on foreign high‑bandwidth memory and networking technologies.
U.S.‑China tech rivalry — The latest twist highlights how the U.S.‑China contest is reshaping enterprise purchasing choices at the most basic level: what chips power the models. Since late 2023, Washington has progressively tightened export rules covering Nvidia’s highest‑end GPUs and networking parts. Nvidia responded with dialed‑back chips for China, including the H20 and then the RTX Pro 6000D. Regulatory whiplash has kept Chinese buyers wary; several paused or reconfigured data‑center plans multiple times this year as rules shifted on both sides of the Pacific.
Performance debate — Whether domestic accelerators can fully replace Nvidia’s in training and inference at scale remains contested. Chinese vendors argue rapid iteration and vertically integrated system designs have narrowed the gap—especially for workloads optimized to their software stacks. Skeptics counter that AI‑focused compilers, libraries, and developer ecosystems around Nvidia’s CUDA remain far more mature. For large‑language model training at the trillion‑parameter frontier, memory bandwidth, interconnect latency, and software tooling often matter as much as peak TOPS or FLOPS on a spec sheet.
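To make the bandwidth argument concrete, a simple roofline calculation (a standard way of reasoning about whether a workload is compute‑bound or memory‑bound) can be sketched in a few lines of Python. The figures below are illustrative assumptions, not the specifications of any Nvidia or Chinese accelerator.

```python
# Roofline sketch: attainable throughput is the lesser of peak compute and
# memory bandwidth multiplied by arithmetic intensity (FLOPs per byte moved).
# All numbers are hypothetical, chosen only to illustrate the trade-off.

def attainable_tflops(peak_tflops: float, mem_bw_tb_s: float, flops_per_byte: float) -> float:
    """Return the roofline-limited throughput in TFLOPS."""
    return min(peak_tflops, mem_bw_tb_s * flops_per_byte)

peak, bandwidth = 300.0, 1.6  # hypothetical: 300 TFLOPS peak, 1.6 TB/s memory bandwidth

workloads = [
    ("decode-style inference (low intensity)", 50.0),   # ~50 FLOPs per byte moved
    ("large batched matmul (high intensity)", 300.0),   # ~300 FLOPs per byte moved
]

for name, intensity in workloads:
    realized = attainable_tflops(peak, bandwidth, intensity)
    print(f"{name}: ~{realized:.0f} of {peak:.0f} TFLOPS attainable")
```

In this illustrative case the same part delivers only about a quarter of its headline number on the low‑intensity workload, which is why spec‑sheet comparisons between domestic and Nvidia accelerators can mislead in both directions.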
Operational implications — In the near term, large platforms will need to rebalance roadmaps. Some are expected to shift more training onto existing inventories of earlier Nvidia parts while accelerating migration paths to domestic gear for new capacity. Systems integrators say demand is pivoting toward turnkey racks built around Chinese accelerators, with expanded support contracts and porting services—a boon for local cloud providers and hardware assemblers. Meanwhile, a subset of research labs will lean harder on overseas compute via affiliates, though compliance teams are drawing tighter circles around such arrangements.
Nvidia’s position — For Nvidia, the ban complicates an already constrained China strategy. Chief executive Jensen Huang has said the company will serve the market where permitted and will continue to adapt products to regulatory rules. But with sales channels narrowing inside China, Nvidia’s growth will hinge even more on U.S., European, Middle Eastern, and Southeast Asian cloud build‑outs, plus a fast‑developing market for on‑prem AI systems at Fortune 500 companies.
The bigger picture — China’s push to localize AI silicon is not merely about chips. It is catalyzing parallel investment in domestic high‑bandwidth memory, optical interconnects, EDA tools, and middleware—areas where supply‑chain chokepoints have multiplied since 2023. Longer term, success will depend on whether software ecosystems can keep pace: compilers that squeeze performance from custom data types, frameworks that abstract away hardware differences, and model toolchains that are portable across accelerators. If those layers mature rapidly, the ban may serve as an accelerant rather than a brake on China’s AI capacity growth.
What’s next — Regulators have framed the directive as a step in reducing “unnecessary dependence” on foreign chips. Industry groups expect follow‑on measures clarifying procurement rules for state‑linked clouds and for platforms that provide public AI services. Internationally, the move is likely to intensify the policy race: Washington and Brussels are weighing additional reporting requirements for cross‑border AI compute, while China is set to amplify subsidies for domestic fabs and packaging houses. In the meantime, developers on both sides will navigate a more balkanized compute landscape—one where model portability and multi‑backend support become core capabilities rather than nice‑to‑haves.
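In practical terms, multi‑backend support often begins with runtime device selection, so that the same model code can target whichever accelerator is actually present. The sketch below is a minimal, hypothetical illustration using PyTorch; the pick_device helper, the optional torch_npu import (Huawei's Ascend plugin for PyTorch), and the fallback order are assumptions made for the example rather than a recommended pattern.

```python
import torch

def pick_device() -> torch.device:
    """Return the first available accelerator backend, falling back to CPU.

    The "npu" branch assumes an optional vendor plugin (here, torch_npu for
    Huawei Ascend parts) is installed and has registered the backend; if it
    is missing, the check silently falls through to CUDA and then CPU.
    """
    try:
        import torch_npu  # noqa: F401  # optional plugin; may not be installed
        if torch.npu.is_available():
            return torch.device("npu")
    except (ImportError, AttributeError):
        pass
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

if __name__ == "__main__":
    device = pick_device()
    # The model definition itself stays backend-agnostic; only the device changes.
    model = torch.nn.Linear(1024, 1024).to(device)
    batch = torch.randn(8, 1024, device=device)
    print(device, model(batch).shape)
```

Keeping everything above the device string backend‑agnostic is precisely the kind of portability a more balkanized compute landscape rewards.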
Bottom line — By effectively shutting Nvidia out of the mainland’s incremental AI build‑out, Beijing is wagering that its domestic ecosystem is ready for prime time. If the bet pays off, China could emerge with more resilient supply chains and a deeper bench of AI hardware suppliers. If it doesn’t, performance and productivity penalties could ripple across an industry that has become a central pillar of the country’s digital economy.
Reporting notes: This article is based on interviews with engineers and executives at Chinese technology firms, regulatory documents reviewed by this publication, and contemporaneous reports from international news outlets published in mid-September 2025.



