What is OpenAI Q*? The mysterious breakthrough that could ‘threaten humanity’

Among the whirlwind of speculation around the sudden firing and reinstatement of OpenAI CEO Sam Altman, there’s been one central question mark at the heart of the controversy. Why was Altman fired by the board to begin with?

We may finally have part of the answer, and it has to do with the handling of a mysterious OpenAI project with the internal codename, “Q*” — or Q Star. Information is limited, but here’s everything we know about the potentially game-changing developments so far.

What is Project Q*?

Before moving forward, it should be noted that all the details about Project Q*, including its existence, come from a handful of fresh reports following the drama around Altman’s firing. Reuters reported on November 22 that it had been given the information by “two people familiar with the matter,” providing a peek behind the curtain of what was happening internally in the weeks leading up to the firing.

According to the article, Project Q* was a new model that excelled in learning and performing mathematics. It was reportedly still only at the level of solving grade-school math problems, but as a starting point, the researchers involved saw it as a promising demonstration of previously unseen intelligence.

Seems harmless enough, right? Well, not so fast. The existence of Q* was reportedly scary enough to prompt several staff researchers to write a letter to the board to raise the alarm about the project, claiming it could “threaten humanity.”

On the other hand, other attempts at explaining Q* aren’t quite as novel, and certainly aren’t so earth-shattering. Meta’s chief AI scientist, Yann LeCun, tweeted that Q* has to do with replacing “auto-regressive token prediction with planning” as a way of improving LLM (large language model) reliability. LeCun says all of OpenAI’s competitors have been working on this, and that OpenAI made a specific hire to address the problem.

Please ignore the deluge of complete nonsense about Q*.
One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning.

Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published…

— Yann LeCun (@ylecun) November 24, 2023

LeCun’s point doesn’t seem to be that such a development isn’t important, but rather that it isn’t some unknown advance that other AI researchers aren’t already pursuing. Then again, in the replies to this tweet, LeCun is dismissive of Altman, saying he has a “long history of self-delusion” and suggesting that the reporting around Q* doesn’t convince him that a significant advancement has been made on the problem of planning in learned models.
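
If the jargon is unfamiliar: “auto-regressive token prediction” just means an LLM generates text one token at a time, always committing to a likely next token given everything written so far, with no lookahead, while “planning” means scoring longer continuations before committing to any of them. Here’s a minimal Python sketch of the difference, using a hand-written probability table as a stand-in for a real model; everything in it, including TOY_LM and its numbers, is invented for illustration and has nothing to do with OpenAI’s or Meta’s actual systems.

# Toy contrast between auto-regressive decoding and a crude form of planning.
# The "language model" is a hard-coded table: a context (tuple of tokens)
# maps to a {next_token: probability} distribution. Purely hypothetical.
TOY_LM = {
    ():       {"the": 0.6, "an": 0.4},
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("an",):  {"owl": 0.9, "elk": 0.1},
}

def greedy_decode(steps=2):
    """Auto-regressive prediction: commit to the single most likely
    next token at every step, never looking ahead."""
    seq, prob = (), 1.0
    for _ in range(steps):
        token, p = max(TOY_LM[seq].items(), key=lambda kv: kv[1])
        seq, prob = seq + (token,), prob * p
    return seq, prob

def planned_decode(steps=2):
    """A crude form of planning: enumerate and score whole candidate
    sequences before committing, then keep the best one."""
    best_seq, best_prob = None, 0.0

    def expand(seq, prob, depth):
        nonlocal best_seq, best_prob
        if depth == steps:
            if prob > best_prob:
                best_seq, best_prob = seq, prob
            return
        for token, p in TOY_LM[seq].items():
            expand(seq + (token,), prob * p, depth + 1)

    expand((), 1.0, 0)
    return best_seq, best_prob

print(greedy_decode())   # ('the', 'cat'), prob 0.30: locked into "the" early
print(planned_decode())  # ('an', 'owl'), prob 0.36: lookahead finds a better path

Real planning research is far more sophisticated than this brute-force search (think tree search and learned value functions), but the trade-off is the same: greedy prediction is fast and commits early, while planning spends extra compute evaluating where a sequence is going before choosing it.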

Was Q* really why Sam Altman was fired?

Sam Altman at the OpenAI developer conference.

From the very beginning of the speculation around the firing of Sam Altman, one of the chief suspected causes was his approach to AI safety. Altman was the one who pushed OpenAI to turn away from its roots as a nonprofit and move toward commercialization. This started with the public launch of ChatGPT and the eventual rollout of ChatGPT Plus, both of which kickstarted this new era of generative AI and pushed companies like Google to go public with their technology as well.

The ethical and safety concerns around this technology being publicly available have always been present, despite all the excitement behind how it has already changed the world. Larger concerns about how fast the technology was developing have been well-documented as well, especially with the jump from GPT-3.5 to GPT-4. Some think the technology is moving too fast without enough regulation or oversight, and according to the Reuters report, “commercializing advances before understanding the consequences” was listed as one of the reasons for Altman’s initial firing.

Although we don’t know if Altman was specifically mentioned in the staff letter about Q* described above, that letter has also been cited as one of the reasons for the board’s decision to fire Altman, a decision that has since been reversed.

It’s worth mentioning that just days before he was fired, Altman said at an AI summit that he was “in the room” a couple of weeks earlier when a major “frontier of discovery” was pushed forward. The timing suggests this may have been a reference to a breakthrough with Q*, and if so, it would confirm Altman’s intimate involvement in the project.

Putting the pieces together, it seems like concerns about commercialization had been present from the beginning, and Altman’s handling of Q* was merely the final straw. The fact that the board was so concerned about the rapid development (and perhaps Altman’s own attitude toward it) that it would fire its all-star CEO is shocking.

To douse some of the speculation, The Verge was reportedly told by “a person familiar with the matter” that the supposed letter about Q* was never received by the board, and that the “company’s research progress” wasn’t a reason for Altman’s firing.

We’ll need to wait for some additional reporting to come to the surface before we ever have a proper explanation for all the drama.

Is it really the beginning of AGI?

AGI, which stands for artificial general intelligence, is where OpenAI has been headed from the beginning. Though the term means different things to different people, OpenAI has always defined AGI as “autonomous systems that surpass humans in most economically valuable tasks,” as the Reuters report notes. Nothing in that definition refers to “self-aware systems,” which is often what people presume AGI means.

Still, on the surface, advances in AI mathematics might not seem like a big step in that direction. After all, we’ve had computers helping us with math for many decades now. But Q* reportedly isn’t just a calculator. Achieving literacy in math requires humanlike logic and reasoning, and the researchers seem to think it’s a big deal. With writing and language, an LLM is allowed to be more fluid in its responses, often giving a wide range of acceptable answers to a question or prompt. Math is the exact opposite: there is often just a single correct answer to a problem. The Reuters report suggests that AI researchers believe this kind of intelligence could even be “applied to novel scientific research.”
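
To make that concrete, here’s a tiny Python illustration of why a single verifiable answer matters; the problem and candidate answers are hand-written stand-ins and do not come from Q* or any real model.

# Why math gives a crisp signal: a grade-school problem has exactly one
# right answer, so a candidate output can be checked mechanically.
problem = "What is 7 * 8 + 5?"
ground_truth = 7 * 8 + 5  # 61: one correct answer, no judgment calls

candidates = [56, 61, 63]  # invented stand-ins for model outputs

for answer in candidates:
    # Unlike grading an essay, where many phrasings are acceptable,
    # verification here is exact equality.
    print(answer, "correct" if answer == ground_truth else "wrong")

Contrast that with a prompt like “summarize this article,” where countless outputs are acceptable and none can be checked with an equality test. That exactness is part of why researchers treat progress on math as a meaningful signal about reasoning.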

Obviously, Q* still seems to be in the early stages of development, but it does appear to be the biggest advancement we’ve seen since GPT-4. If the hype is to be believed, it should certainly be considered a major step on the road toward AGI, at least as OpenAI defines it. Depending on your perspective, that’s either cause for optimistic excitement or existential dread.

But again, let’s not forget the remarks from LeCun mentioned above. Whatever Q* is, it’s probably safe to assume that OpenAI isn’t the only research lab pursuing this kind of development. And if it ends up not actually being the reason for Altman’s firing, as The Verge’s report insists, maybe it’s not as big of a deal as the Reuters report claims.

Luke Larsen
Luke Larsen is the Senior Editor of Computing, managing all content covering laptops, monitors, PC hardware, Macs, and more.