Can GPT-5 Truly Live Up to Its ‘PhD-Level’ Claims?

OpenAI has unveiled its highly anticipated GPT-5 model, touting expertise on a par with a PhD-level professional across fields such as coding and writing. The company’s CEO, Sam Altman, has hailed the new model as a significant leap forward, saying it is the first time users can interact with an AI that feels like an expert in any topic. Not everyone, however, is convinced that GPT-5’s capabilities live up to the marketing hype.
According to Altman, GPT-5’s reasoning capabilities have improved significantly, allowing it to demonstrate its thought process and provide more accurate responses. The model has also been trained to be more honest and less prone to “hallucinations,” in which an AI model confidently invents an answer. OpenAI says GPT-5 can generate complete pieces of software and assist professional coders, positioning it as a serious tool for working developers.
One of GPT-5’s key features is its use of a “reasoning model,” which, rather than answering immediately, works through intermediate steps before producing a response. OpenAI says this extra deliberation allows the system to analyze complex problems more carefully and solve them more effectively, and that the result is a more nuanced, insightful conversational experience.
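For a sense of what this looks like in practice, here is a minimal sketch of how a developer might query such a model through the publicly available OpenAI Python SDK; the “gpt-5” model identifier and the prompt are illustrative assumptions, not confirmed details of the release.

# Minimal sketch, assuming the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY set in the environment. The "gpt-5" model name is an
# assumption used for illustration; real identifiers may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",  # hypothetical identifier for the new model
    messages=[
        {
            "role": "user",
            "content": "Walk me through your reasoning: why does ice float on water?",
        }
    ],
)

# Print the model's final answer text.
print(response.choices[0].message.content)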
However, some experts remain skeptical. Prof. Carissa Véliz, of the Institute for Ethics in AI, cautions that the ability to mimic human reasoning should not be mistaken for human intelligence. “These systems haven’t been able to be really profitable, and they can only mimic human reasoning abilities,” she said. “We need to be careful not to overhype the capabilities of AI, and instead focus on developing more robust and transparent systems that can actually deliver on their promises.”
The launch of GPT-5 has also sparked concern about the widening gap between AI capabilities and our ability to govern them. Gaia Marcus, Director of the Ada Lovelace Institute, argues that regulation is struggling to keep up. “As these models become more capable, the need for comprehensive regulation becomes even more urgent,” she said. “We need to develop new frameworks and guidelines that can keep pace with the rapid evolution of AI, and ensure that these technologies are developed and deployed in ways that are safe, fair, and beneficial to society.”
Meanwhile, OpenAI is facing criticism from rival firm Anthropic, which revoked OpenAI’s access to its application programming interface (API) over concerns that OpenAI was using Anthropic’s coding tools in the run-up to GPT-5’s launch. An OpenAI spokesperson disputed the claim, saying it is industry standard to evaluate other AI systems in order to benchmark progress and safety.
In a separate development, OpenAI has announced changes to its ChatGPT platform to promote a healthier relationship between users and the AI. The company has vowed to provide more nuanced responses to sensitive questions, such as relationship advice, and has pulled a recent update that made ChatGPT overly flattering. Sam Altman has acknowledged that the company’s products can have a profound impact on users, and has called for society to develop new guardrails to mitigate potential problems.
The launch of GPT-5 also highlights the growing competition in the AI space, with other companies such as Meta and Google also working on their own AI models. The development of these technologies has the potential to transform industries and revolutionize the way we live and work, but it also raises important questions about the ethics and governance of AI.
As GPT-5 rolls out to users, it remains to be seen whether it truly lives up to its “PhD-level” claims. Only time will tell whether this latest AI advancement reshapes the way we interact with technology, or whether it is just a marketing gimmick.



