OpenAI has announced that it rolled out a new version of its GPT-4o large language model to power its ChatGPT chatbot, but it has declined to specify exactly how the updated model differs from its predecessor.
“To be clear, this is an improvement to GPT-4o and not a new frontier model,” the company posted on X (formerly Twitter) Monday.
there's a new GPT-4o model out in ChatGPT since last week. hope you all are enjoying it and check it out if you haven't! we think you'll like it 😃
— ChatGPT (@ChatGPTapp) August 12, 2024
“We’ve introduced an update to GPT-4o that we’ve found, through experiment results and qualitative feedback, ChatGPT users tend to prefer,” the company wrote in its Model Release Notes. “It’s not a new frontier-class model. Although we’d like to tell you exactly how the model responses are different, figuring out how to granularly benchmark and communicate model behavior improvements is an ongoing area of research in itself (which we’re working on!).”
In the absence of specific details from the company, many users have, unsurprisingly, begun speculating about the nature of the changes and whether they amount to new features. X user @misaligned_agi ventured that the update had added a multistep reasoning method rather than delivering an entirely new model.
OpenAI quickly put the kibosh on that line of thinking, with a spokesperson telling VentureBeat that it wasn’t in fact a new reasoning process and that the behavior that @misaligned_agi observed could have been triggered by the structure of their prompt.
Other users also voiced their theories on social media, arguing that GPT-4o recently began behaving in subtly different and better ways, and that its image-generating quality had improved. “For the first time in a long time, it provided better ‘vibes’ on an output than 3.5 Sonnet,” observed X user @mattshumer_.
After allowing users to take their best guesses at the nature of the new iteration, which OpenAI calls chatgpt-4o-latest, the company added a few scant details to its Models page on Wednesday.
Described as a “dynamic model continuously updated to the current version of GPT-4o in ChatGPT,” chatgpt-4o-latest has a knowledge cutoff of October 2023 and can accommodate 128,000 tokens, or roughly 96,000 words, per conversation, just as the previous GPT-4o version did. It can output up to 16,384 tokens, or roughly 12,288 words, on par with the newer GPT-4o mini model and roughly quadruple what the older GPT-4o could produce.
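To put those token counts in perspective, here is a minimal Python sketch of the arithmetic, using OpenAI's common rule of thumb of roughly 0.75 English words per token. The 4,096-token output cap attributed to the older GPT-4o is inferred from the "roughly quadruple" comparison above, and the usage sketch at the end assumes the OpenAI Python SDK and the chatgpt-4o-latest model identifier; none of this is something OpenAI has spelled out for this particular update.

```python
# Back-of-the-envelope arithmetic for the limits cited above.
# Assumption: ~0.75 English words per token (OpenAI's rough rule of thumb;
# the real ratio varies with language and content).
from openai import OpenAI  # only needed for the usage sketch at the bottom

WORDS_PER_TOKEN = 0.75

CONTEXT_WINDOW_TOKENS = 128_000      # context window cited for chatgpt-4o-latest
MAX_OUTPUT_TOKENS = 16_384           # maximum output tokens cited above
OLDER_GPT4O_OUTPUT_TOKENS = 4_096    # inferred from the "roughly quadruple" comparison

print(f"Context window: ~{CONTEXT_WINDOW_TOKENS * WORDS_PER_TOKEN:,.0f} words")   # ~96,000
print(f"Max output:     ~{MAX_OUTPUT_TOKENS * WORDS_PER_TOKEN:,.0f} words")       # ~12,288
print(f"Output vs. older GPT-4o: {MAX_OUTPUT_TOKENS / OLDER_GPT4O_OUTPUT_TOKENS:.0f}x")  # 4x

# Hypothetical usage sketch: calling the model through the OpenAI Python SDK
# and capping output at the stated limit (requires OPENAI_API_KEY to be set).
client = OpenAI()
response = client.chat.completions.create(
    model="chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Summarize what changed in the latest GPT-4o update."}],
    max_tokens=MAX_OUTPUT_TOKENS,
)
print(response.choices[0].message.content)
```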
Unfortunately, hard stats like these don’t provide insight into what the new model is actually capable of, and, apparently, neither will its developers.