Expert Perspectives on the OpenAI Turmoil and Q* Technology


Q*, AGI and the Altman Saga

In the aftermath of last week’s upheaval at OpenAI, attention has turned to the enigmatic Q*—a reportedly groundbreaking AI model said to have played a role in the removal of CEO Sam Altman by the company’s chief scientist, Ilya Sutskever, and its board.

Yann LeCun, Vice President and Chief AI Scientist at Meta, offers a pointed perspective on the recent developments. Dismissing what he deems a “deluge of complete nonsense” regarding Q*, LeCun emphasises a critical challenge in improving the reliability of large language models (LLMs): replacing auto-regressive token prediction with planning. He notes that leading laboratories, including FAIR, DeepMind, and OpenAI, are all working on this challenge, with some already sharing their ideas and results. LeCun posits that Q* likely represents OpenAI’s attempt at planning, pointing to the strategic hiring of Noam Brown, renowned for his work on Libratus (poker) and Cicero (Diplomacy), to contribute to this pursuit.

To delve into the significance of Q* and its potential impact, experts were consulted to shed light on the technology’s capabilities and the broader implications for the field of AI.

The endeavor to equip AI models with math-solving prowess has been a longstanding pursuit. While existing language models like ChatGPT and GPT-4 exhibit some mathematical aptitude, they are not entirely reliable. Wenda Li, an AI lecturer at the University of Edinburgh, underscores the current limitations in algorithms and architectures, highlighting the difficulty of consistently solving math problems with AI. Math, a litmus test for reasoning, poses a unique challenge, demanding that machines not only recognise patterns but also comprehend and reason through information.

Katie Collins, a PhD researcher specialising in math and AI at the University of Cambridge, emphasises the intricate nature of math problems, which often require multi-step planning. This aligns with LeCun’s suggestion that Q* represents OpenAI’s foray into planning.

Concerns about the existential risks associated with AI become pertinent when machines gain the ability to set their own goals and interact with the real world. However, as improved math capabilities bring us closer to powerful AI systems, it doesn’t necessarily signify the immediate advent of superintelligence, according to Collins. The type of math problems Q* can solve becomes crucial in understanding the scope of its capabilities.

Collins highlights the distinction between solving elementary-school math problems and pushing the boundaries of mathematics at the level of a Fields medalist. While AI systems have made strides on some challenging problems, reliably mastering even elementary-school problems remains elusive.

Despite the secrecy and speculation surrounding Q*, Collins emphasises that, if the reports are accurate, the development is a noteworthy one. A deeper understanding of mathematics could have far-reaching applications in scientific research, engineering, personalised tutoring, and aiding mathematicians in solving complex problems.

This isn’t the first time a new model has ignited AGI hype, as seen with Google DeepMind’s Gato last year. However, such cycles of excitement can divert attention from real issues surrounding AI and may impact regulations, especially as the EU finalises its AI Act.

The recent upheaval at OpenAI and the spotlight on Q* raise questions about internal governance, transparency, and the delicate balance between technological advancement and potential risks. As the tech sector faces increased scrutiny and regulation, the significance of AI breakthroughs like Q* underscores the need for responsible self-regulation by tech companies.

The emergence of Q* and the subsequent internal turbulence at OpenAI not only captivate the AI research community but also reverberate across the broader business landscape, influencing perceptions of innovation, governance, and technological risk. As companies strive to harness the potential of cutting-edge AI technologies, the events at OpenAI underscore the delicate balance they must strike between pushing the boundaries of innovation and addressing the inherent risks.

But an interesting new twist to the story suggests OpenAI may have been on the verge of a major leap forward, and that it may indeed have been related to the shakeup.

Last week, Reuters and The Information reported that some OpenAI leaders may have been spooked by a powerful new AI the company was working on called Q*, pronounced “Q star.” The new system was apparently seen by some as a significant step towards the company’s goal of achieving AGI, and is reportedly capable of solving grade-school math problems.

According to Reuters, Mira Murati, OpenAI’s chief technology officer, who briefly served as interim CEO following Altman’s dismissal, acknowledged the existence of the new model in an internal message to staffers.

Reuters’ sources claim Q* was one of many factors leading to Altman’s firing, amid concerns about commercialising a product that was not yet fully understood.

While grade-school math may not sound like a groundbreaking achievement, researchers have long regarded such an ability as a significant benchmark. Instead of simply predicting the next word in a sentence, as the company’s GPT systems do, an AI algorithm that could solve math problems would need to “plan” several steps ahead.

Think of it as a Sherlock Holmes-like entity that can string together clues to reach a conclusion.

“If it has the ability to logically reason and reason about abstract concepts, which right now is what it really struggles with, that’s a pretty tremendous leap,” Charles Higgins, a cofounder of the AI-training startup Tromero, told Business Insider.

“Maths is about symbolically reasoning — saying, for example, ‘If X is bigger than Y and Y is bigger than S, then X is bigger than S,'” he added. “Language models traditionally really struggle at that because they don’t logically reason, they just have what are effectively intuitions.”

“In the case of math, we know existing AIs have been shown to be capable of undergraduate-level math but to struggle with anything more advanced,” Andrew Rogoyski, a director at the Surrey Institute for People-Centred AI, told BI. “However, if an AI can solve new, unseen problems, not just regurgitate or reshape existing knowledge, then this would be a big deal, even if the math is relatively simple.”

But is Q* really a breakthrough that could pose an actual existential threat? Experts aren’t convinced.

“I don’t think it immediately gets us to AGI or scary situations,” Collins told MIT Technology Review.

“Solving elementary-school math problems is very, very different from pushing the boundaries of mathematics at the level of something a Fields medalist can do,” she added, referring to an international prize in mathematics.

“I think it’s symbolically very important,” Sophia Kalanovska, a fellow Tromero cofounder and PhD candidate, told BI. “On a practical level, I don’t think it’s going to end the world.”

In short, OpenAI’s algorithm — if it indeed exists and its results can withstand scrutiny — could represent meaningful progress in the company’s efforts to realise AGI, but with many caveats.

Was it the only factor behind Altman’s ousting? At this point, there’s plenty of evidence to believe there was more going on behind the scenes, including internal disagreements over the future of the company.

The extraordinary rise of AI has often been spurred on by seemingly outlandish claims, plenty of fearmongering, and a considerable amount of hype. The latest excitement surrounding OpenAI’s rumored follow-up to its GPT-4 model is likely no different.

The revelation of breakthrough technologies, such as Q*, can elicit excitement, but it also prompts a closer examination of the ethical, regulatory, and safety considerations in the rapidly evolving field of artificial intelligence. In the wake of these developments, businesses may find themselves navigating increased scrutiny from stakeholders, regulators, and the public, necessitating a proactive approach to transparency, responsible AI development, and effective governance. As the business landscape continues to intertwine with advancements in AI, how companies respond to these challenges will likely shape not only their individual trajectories but also the broader narrative surrounding the responsible deployment of transformative technologies in the digital era.
