As AI competition intensifies, OpenAI opens the door to loosening safety protocols—if rivals move first.

In a notable shift reflecting the escalating race in artificial intelligence development, OpenAI announced it may relax some of its safety protocols if rival companies release “high-risk” AI systems without comparable precautions.
The statement came Tuesday as part of an update to OpenAI’s Preparedness Framework—the internal guideline the company uses to assess safety standards and determine the necessary safeguards for AI model development and deployment.
“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” OpenAI wrote in a blog post. However, the company emphasized that any such changes would be carefully considered, claiming it would only proceed after confirming a change in the overall risk landscape and publicly disclosing the decision.
The move comes amid growing scrutiny over how commercial AI labs balance safety with speed. Critics have accused OpenAI of compromising its own standards in pursuit of faster product rollouts. The company was recently named in a legal brief filed by 12 former employees in Elon Musk’s lawsuit against OpenAI. The brief alleges that the firm could be incentivized to further cut corners on safety as it moves ahead with a planned corporate restructuring.
OpenAI insists that its safeguards will remain “at a level more protective” even if policy adjustments are made. Still, critics remain skeptical.
A key change in the updated framework is OpenAI’s increased reliance on automated safety evaluations. While the company says human testing remains part of the process, it touts a “growing suite of automated evaluations” designed to support its faster development cycles.
That claim appears at odds with recent reporting from the Financial Times, which suggests OpenAI significantly shortened safety check windows for upcoming major model releases—giving testers less than a week in some cases. The report also indicates that many safety evaluations are being conducted on earlier, less advanced versions of the models, raising concerns about the rigor of the testing process.
Adding fuel to the controversy, some observers noted what was missing from the latest update. Steven Adler, a noted AI policy analyst, pointed out on social media that OpenAI no longer appears to require safety evaluations for finetuned models—a significant omission from its list of framework revisions.
The updated Preparedness Framework also introduces a refined risk categorization system. OpenAI now defines AI systems based on whether they reach “high” or “critical” capability thresholds. High-capability models are those that could amplify existing risks of serious harm, while critical-capability models introduce entirely new and potentially more dangerous pathways to harm, the company explained.
“Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” OpenAI said. “Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development.”
These revisions mark the first changes to the framework since its inception in 2023. They come at a time when AI labs worldwide are under pressure to push boundaries and outpace competitors—raising fundamental questions about whether safety can keep up.
As AI capabilities grow more powerful and the stakes continue to rise, OpenAI’s shifting posture may spark further debate over how—and whether—companies should govern technologies that could reshape society.



