Meta’s latest foray into the competitive world of artificial intelligence has landed with a thud rather than a thunderclap.

Over the weekend, the tech giant unveiled three new AI models in its Llama 4 family: Scout, Maverick, and Behemoth, the last of which is still in training. Marketed as the next leap in what Meta calls “open-ish” AI, the releases were expected to make waves. Instead, they were met with skepticism and disappointment from the broader AI community.
Rather than showcasing cutting-edge innovation, critics said, the models offered little to differentiate them from existing tools. Online forums such as Reddit and X (formerly Twitter) lit up with accusations of benchmark manipulation and questions about discrepancies between the models’ publicly reported benchmark results and the performance of the versions actually released. One particularly persistent rumor, attributed to an unnamed former Meta employee, added further intrigue and confusion to the narrative.
The backlash reflects a deeper tension within the AI sector, where the rush to outperform rivals often centers on flashy benchmark scores rather than real-world functionality. On this week’s episode of TechCrunch’s Equity podcast, hosts Kirsten Korosec, Max Zeff, and Anthony Ha dissected the implications of Meta’s misstep.
“Creating something to do well on a test doesn’t always translate to good business,” Korosec noted, highlighting a growing disconnect between technical performance metrics and commercial viability.
Meta’s rocky rollout serves as a cautionary tale in an industry increasingly defined by hype and high expectations. As AI development accelerates, the pressure is mounting not just to innovate, but to deliver.
