Amid the whirlwind of speculation around the sudden firing and reinstatement of OpenAI CEO Sam Altman, there’s been one central question at the heart of the controversy: Why was Altman fired by the board to begin with?
We may finally have part of the answer, and it has to do with the handling of a mysterious OpenAI project with the internal codename “Q*,” or Q Star. Information is limited, but here’s everything we know about the potentially game-changing developments so far.
What is Project Q*?
Before moving forward, it should be noted that all the details about Project Q*, including its existence, come from fresh reports following the drama around Altman’s firing. Reuters said on November 22 that it had been given the information by “two people familiar with the matter,” providing a peek behind the curtain of what was happening internally in the weeks leading up to the firing.
According to the article, Project Q* was a new model that excelled at learning and performing mathematics. It was still reportedly only capable of solving grade-school math, but as a starting point, the researchers involved saw it as a promising demonstration of previously unseen intelligence.
Seems harmless enough, right? Well, not so fast. The existence of Q* was reportedly alarming enough to prompt several staff researchers to write a letter to the board warning that the project could “threaten humanity.”
On the other hand, other attempts at explaining Q* aren’t quite as novel, and certainly aren’t so earth-shattering. Meta’s chief AI scientist, Yann LeCun, tweeted that Q* has to do with replacing “auto-regressive token prediction with planning” as a way of improving LLM (large language model) reliability. LeCun says this is something all of OpenAI’s competitors have been working on, and that OpenAI made a specific hire to address the problem.
Please ignore the deluge of complete nonsense about Q*.
One of the main challenges to improve LLM reliability is to replace Auto-Regressive token prediction with planning. Pretty much every top lab (FAIR, DeepMind, OpenAI etc) is working on that and some have already published…
— Yann LeCun (@ylecun) November 24, 2023
LeCun’s point doesn’t seem to be that such a development isn’t important, but that it’s not some secret breakthrough that other AI researchers aren’t already pursuing. Then again, in the replies to this tweet, LeCun is dismissive of Altman, saying he has a “long history of self-delusion” and suggesting that the reporting around Q* doesn’t convince him that a significant advancement in the problem of planning in learned models has been made.
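To make LeCun’s distinction concrete, here’s a minimal, purely illustrative Python sketch (every score and rule here is invented for the example; it doesn’t reflect anything OpenAI or Meta has actually built). Auto-regressive decoding commits to the locally best token at every step, while a planner searches over whole sequences against a global objective and can recover a better overall answer:

```python
import itertools

# Toy stand-in for a language model's next-token scores. The numbers are
# invented purely for illustration.
VOCAB = ["a", "b", "c"]

def next_token_scores(prefix):
    # "b" always looks slightly better locally, but (see sequence_score)
    # sequences containing the pair ("a", "c") score highest overall.
    return {"a": 0.4, "b": 0.5, "c": 0.1}

def sequence_score(tokens):
    # A hypothetical global objective a planner could optimize.
    score = sum(next_token_scores(tokens[:i])[t] for i, t in enumerate(tokens))
    if ("a", "c") in zip(tokens, tokens[1:]):
        score += 1.0  # bonus for the globally valuable pair
    return score

def greedy_autoregressive(length):
    # Standard auto-regressive decoding: commit to the locally best token
    # at each step and never revisit earlier choices.
    tokens = []
    for _ in range(length):
        scores = next_token_scores(tuple(tokens))
        tokens.append(max(scores, key=scores.get))
    return tuple(tokens)

def plan(length):
    # "Planning" in the loosest sense: search over whole sequences and keep
    # the one with the best global score.
    return max(itertools.product(VOCAB, repeat=length), key=sequence_score)

print("greedy: ", greedy_autoregressive(3))  # ('b', 'b', 'b'), locally best
print("planned:", plan(3))                   # ('a', 'c', 'b'), globally best
```

The greedy decoder can never discover the high-scoring pair because it optimizes one token at a time; the planner can, at the cost of a far more expensive search. Making that kind of lookahead tractable at LLM scale is the hard part LeCun is referring to.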
Was Q* really why Sam Altman was fired?
From the very beginning of the speculation around the firing of Sam Altman, one of the chief suspects was his approach to AI safety. Altman was the one who pushed OpenAI to turn away from its roots as a nonprofit and move toward commercialization. This started with the public launch of ChatGPT and the eventual rollout of ChatGPT Plus, both of which kickstarted this new era of generative AI, prompting companies like Google to go public with their technology as well.
The ethical and safety concerns around making this technology publicly available have always been present, despite the excitement over how it has already changed the world. Larger concerns about how fast the technology is developing have been well-documented too, especially with the jump from GPT-3.5 to GPT-4. Some think the technology is moving too fast without enough regulation or oversight, and according to the Reuters report, “commercializing advances before understanding the consequences” was listed as one of the reasons for Altman’s initial firing.
Although we don’t know whether Altman was specifically named in the letter about Q*, that letter is also being cited as one of the reasons for the board’s decision to fire him, a decision that has since been reversed.
It’s worth mentioning that just days before he was fired, Altman said at an AI summit that he was “in the room” a couple of weeks earlier when a major “frontier of discovery” was pushed forward. The timing suggests this may have been a reference to a breakthrough in Q*, and if so, it would confirm Altman’s intimate involvement in the project.
Putting the pieces together, it seems like concerns about commercialization had been present since the beginning, and Altman’s handling of Q* was merely the final straw. The fact that the board was so concerned about the rapid development (and perhaps Altman’s own attitude toward it) that it would fire its all-star CEO is shocking.
To douse some of the speculation, The Verge reported that “a person familiar with the matter” told it the board never received the supposed letter about Q*, and that the “company’s research progress” wasn’t a reason for Altman’s firing.
We’ll need to wait for additional reporting to surface before we have a proper explanation for all the drama.
Is it really the beginning of AGI?
AGI, which stands for artificial general intelligence, is where OpenAI has been headed from the beginning. Though the term means different things to different people, OpenAI has always defined AGI as “autonomous systems that surpass humans in most economically valuable tasks,” as the Reuters report notes. Nothing about that definition makes reference to “self-aware systems,” which is often what people presume AGI means.
Still, on the surface, advances in AI mathematics might not seem like a big step in that direction. After all, we’ve had computers helping us with math for many decades now. But Q* reportedly isn’t just a calculator. Learning to do math requires humanlike logic and reasoning, and researchers seem to think it’s a big deal. With writing and language, an LLM is allowed to be more fluid in its answers, often giving a wide range of responses to questions and prompts. But math is the exact opposite, where often there is just a single correct answer to a problem. The Reuters report suggests that AI researchers believe this kind of intelligence could even be “applied to novel scientific research.”
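That “single correct answer” property matters because it makes math exactly checkable: you can sample many candidate answers from a model and verify or vote on them, which is much harder to do with open-ended prose. Here’s a minimal Python sketch of that idea (the sampling function and its canned outputs are hypothetical, and nothing here reflects how Q* actually works):

```python
from collections import Counter

def sample_answers(problem, n=5):
    # Hypothetical stand-in for sampling several answers from a model.
    # A real model would be stochastic; these are canned for illustration.
    return ["12", "12", "11", "12", "13"][:n]

def is_correct(problem, answer):
    # Math answers admit an exact check, unlike an essay prompt where many
    # different responses could all be acceptable.
    return int(answer) == eval(problem)  # eval is fine for toy arithmetic

problem = "7 + 5"
samples = sample_answers(problem)

# Option 1: majority vote across samples (a self-consistency-style check).
majority = Counter(samples).most_common(1)[0][0]

# Option 2: keep only answers that pass the exact check (a verifier-style check).
verified = [a for a in samples if is_correct(problem, a)]

print(f"majority vote: {majority}, verified: {verified}")
```

Techniques like majority voting and trained verifiers have appeared in published research on grade-school math, though we don’t know whether Q* uses anything like them. Either way, exact checkability is one reason researchers treat reliable math performance as a meaningful signal rather than a party trick.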
Obviously, Q* still seems to be in the early stages of development, but it does appear to be the biggest advancement we’ve seen since GPT-4. If the hype is to be believed, it should certainly be considered a major step on the road toward AGI, at least as OpenAI defines it. Depending on your perspective, that’s either cause for optimistic excitement or existential dread.
But again, let’s not forget LeCun’s remarks mentioned above. Whatever Q* is, it’s probably safe to assume that OpenAI isn’t the only research lab attempting this kind of development. And if it ends up not actually being the reason for Altman’s firing, as The Verge’s report insists, maybe it’s not as big of a deal as the Reuters report claims.