Experts criticize Google’s limited disclosure on its latest AI model, warning that vague safety reports reflect a growing transparency gap in the race to deploy powerful AI systems.

Google is facing criticism from AI experts and policy advocates following the release of a safety report for its latest large language model, Gemini 2.5 Pro. The technical document, published Thursday, has been described as lacking crucial detail, raising concerns over the company’s transparency and commitment to AI safety.
The report comes weeks after the public launch of Gemini 2.5 Pro, billed as Google’s most advanced AI model to date. While such technical disclosures are typically viewed as valuable tools for independent researchers and policymakers, critics say Google’s latest effort falls short of industry expectations.
“This report is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” said Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, in an interview with TechCrunch. “It’s impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models.”
Notably absent from the report is any mention of Google’s Frontier Safety Framework (FSF), an initiative launched last year aimed at identifying advanced AI capabilities that could pose severe risks. Experts say omitting references to the FSF and other safety evaluations raises red flags.
Unlike some of its competitors, Google only publishes safety reports for AI models once they are no longer considered “experimental.” Additionally, the company excludes results from certain “dangerous capability” tests, reserving them for a separate internal audit. That strategy, critics argue, limits the public’s ability to evaluate the safety implications of powerful models like Gemini 2.5 Pro.
Thomas Woodside, co-founder of the Secure AI Project, expressed skepticism about the company’s commitment to regular and thorough safety disclosures. “The last time Google published results of dangerous capability tests was in June 2024—for a model released four months earlier,” Woodside said. He added that the lack of a safety report for Gemini 2.5 Flash, a smaller and more efficient model unveiled last week, further undermines confidence. A Google spokesperson said a report for Flash is “coming soon.”
Woodside said he hopes the promised report signals a shift toward more consistent transparency. “Those updates should include the results of evaluations for models that haven’t been publicly deployed yet, since those models could also pose serious risks,” he said.
The concerns surrounding Google are part of a broader pattern in the AI industry. Meta’s recent safety documentation for its Llama 4 model has also been described as thin, while OpenAI opted not to publish any safety report for its GPT-4.1 series.
The trend is alarming to some observers, especially in light of commitments major tech firms have made to regulators around the world. In 2023, Google told the U.S. government it would publish safety reports for all significant AI models, a promise it echoed in agreements with other nations.
“This meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency,” said Kevin Bankston, senior adviser on AI governance at the Center for Democracy and Technology. “Combined with reports that competing labs like OpenAI have shaved their safety testing time from months to days, the situation is deeply concerning.”
In response to criticism, Google has stated that its internal safety testing includes “adversarial red teaming” and other protocols, even if those measures are not detailed in the publicly available reports.
As AI models grow increasingly powerful and integrated into everyday tools and platforms, experts warn that transparency will be essential—not only for safety but also for maintaining public trust. For now, many are watching closely to see whether Google and its competitors will follow through on their promises.