Former White House China Adviser Julian Gewirtz Warns Unexpected AI Advances Could Trigger Conflict

In a July 29, 2025 commentary for the Financial Times, Julian Gewirtz, a former White House adviser on China affairs, cautions that the intensifying competition between the United States and China in artificial intelligence (AI) carries grave geopolitical risks. When one side gains an unexpected technological capability, he argues, tensions can escalate and the prospect of conflict rises.
Over the past decade, both superpowers have invested heavily in AI research and development, seeking advantages in economic productivity, national security, and scientific innovation. The U.S. has focused on private-sector leadership, with companies such as OpenAI and Google driving breakthroughs in large language models and autonomous systems. China, meanwhile, has pursued a state-led approach, leveraging vast datasets, government subsidies, and domestic champions like DeepSeek to achieve rapid progress.
Gewirtz highlights the danger of "technological surprise," a scenario in which rapid, unanticipated AI breakthroughs by one side outpace the other's ability to respond. Drawing parallels to historical arms races, he argues that similar dynamics could unfold in digital domains, where novel AI capabilities can be weaponized or used to undermine critical infrastructure. "An unforeseen leap in AI could upend strategic balances," Gewirtz writes, "compelling states to rush toward defensive and offensive deployments before fully understanding the technology's implications."
The adviser urges policymakers to recognize that AI competition need not be a zero-sum game unless it is treated as one. He recommends establishing robust early-warning mechanisms, modeled in part on arms control verification, to monitor emerging AI capabilities. Such transparency measures could include data-sharing agreements for AI benchmarks, joint research initiatives, and bilateral dialogues to discuss red lines around autonomous weapons or mass surveillance systems.
Experts warn that without such safeguards, misperceptions and worst-case assumptions could drive preemptive actions. For instance, if Beijing were believed to have developed an AI algorithm capable of cracking U.S. encryption or autonomously directing drone swarms, Washington might feel compelled to step up military preparations, potentially sparking a dangerous cycle of tit-for-tat measures.
Conversely, if U.S. firms unveiled an AI model that dramatically enhances space-based reconnaissance or electronic warfare, China could interpret the move as an aggressive escalation, prompting accelerated deployment of countermeasures. Gewirtz notes that unlike nuclear weapons—whose destructive potential and pathways to use are broadly understood—AI’s risks are diffuse and multifaceted, spanning cyber, economic, and cognitive domains.
To mitigate these risks, Gewirtz advocates for a dual-track strategy: safeguarding competitive innovation while instituting norms and guardrails. On the innovation front, both countries should continue supporting fundamental research, workforce development, and public-private partnerships. On the governance track, they should jointly explore international standards for AI safety, reliability, and ethical use—leveraging forums such as the G20 and the UN’s Group of Governmental Experts on Lethal Autonomous Weapons Systems.
He also emphasizes the role of third-party stakeholders—academics, civil society organizations, and the wider tech community—in fostering a global AI ecosystem rooted in trust. Civil society can provide independent assessments of AI risks; academics can develop methodologies for evaluating system behavior; and companies can adopt transparent reporting on model capabilities and limitations.
As the world approaches the threshold of artificial general intelligence (AGI), the stakes could become existential. Gewirtz warns that “the strategic impulse to seize or deny revolutionary AI technologies may prove irresistible unless we embed cooperation into the heart of competition.” The coming months, with major national AI strategies due for revision, present a critical window for Beijing and Washington to choose diplomacy over discord.
In his concluding remarks, Gewirtz reminds readers that avoiding conflict is ultimately not a matter of goodwill alone but of wise foresight. “When one side perceives that it has fallen irreversibly behind,” he writes, “the temptation to use force or coercion grows. We must ensure that no AI breakthrough creates such a perception, lest we race toward confrontation instead of harnessing AI for humanity’s benefit.”


