Apple is tackling one of the most frustrating aspects of AI today


As companies like Google, Anthropic, and OpenAI update and upgrade their AI models, the way that those LLMs interact with users is sure to change as well. However, getting used to the new system can become a hassle for users who then have to adjust how they pose their queries in order to get the results they’ve come to expect. An Apple research team has developed a new method to streamline that upgrade transition while reducing inconsistencies between the two versions by as much as 40%.

As part of their study, “MUSCLE: A Model Update Strategy for Compatible LLM Evolution,” published July 15, the researchers argue that when upgrading their models, developers tend to focus more on upping the overall performance, rather than making sure that the transition between models is seamless for the user. That includes making sure that negative flips, wherein the new model predicts the incorrect output for a test sample that was correctly predicted by the older model, are kept to a minimum.
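The negative-flip idea above can be sketched as a simple metric: count the samples the old model answered correctly that the updated model now gets wrong. This is a minimal illustration with hypothetical labels and predictions, not the paper's actual evaluation code.

```python
def negative_flip_rate(labels, old_preds, new_preds):
    """Fraction of samples where the old model was correct
    but the updated model is incorrect (a 'negative flip')."""
    flips = sum(
        1 for y, old, new in zip(labels, old_preds, new_preds)
        if old == y and new != y
    )
    return flips / len(labels)

# Hypothetical toy data: the update flips two previously correct answers.
labels    = [1, 0, 1, 1, 0]
old_preds = [1, 0, 1, 0, 0]
new_preds = [1, 1, 0, 1, 0]
print(negative_flip_rate(labels, old_preds, new_preds))  # → 0.4
```

A regression-aware update strategy like MUSCLE aims to drive this number down even when overall accuracy goes up.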

This is because, the study authors argue, each user has their own quirks, quibbles, and personalized ways of interacting with chatbots. Having to continually adjust and adapt the manner in which they interact with a model can become an exhausting affair — one that is antithetical to Apple’s desired user experience.

The research team even argues that incorrect predictions by the AI should stay consistent between versions. “There is value in being consistent when both models are incorrect,” they wrote. “A user may have developed coping strategies on how to interact with a model when it is incorrect.”

Apple presents MUSCLE

A Model Update Strategy for Compatible LLM Evolution

Large Language Models (LLMs) are frequently updated due to data or architecture changes to improve their performance. When updating models, developers often focus on increasing overall performance… pic.twitter.com/ATm2zM4Poc

— AK (@_akhaliq) July 15, 2024

To address this, the researchers first developed metrics to measure the degree of regression between models and then devised a strategy to minimize such regressions. The result is MUSCLE, a strategy that doesn’t require developers to retrain the entire base model, relying instead on training adapters. Adapters are small AI modules that can be integrated at different points along the overall LLM.

Developers can then fine-tune these specific modules instead of the entire model. This enables the model as a whole to perform distinct tasks at a fraction of the training cost and with only a small increase in the number of parameters. They’re essentially plug-ins for large language models that allow us to fine-tune specific sections of the overall AI instead of the whole thing.
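The adapter pattern described above can be illustrated with a toy sketch: a small trainable bottleneck that down-projects a frozen layer's output, applies a nonlinearity, up-projects it, and adds the result back as a residual. The dimensions and weights here are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = 16      # hidden size of the (hypothetical) frozen base-model layer
bottleneck = 4   # adapter bottleneck dimension — far fewer parameters

# Stand-in for one token's output from a frozen base-model layer.
h = rng.standard_normal(hidden)

# Trainable adapter weights, initialized near zero so the adapter
# starts out close to an identity mapping.
W_down = rng.standard_normal((bottleneck, hidden)) * 0.01
W_up   = rng.standard_normal((hidden, bottleneck)) * 0.01

def adapter(h):
    z = np.maximum(W_down @ h, 0.0)  # down-project + ReLU
    return h + W_up @ z              # up-project, add back as a residual

out = adapter(h)
print(out.shape)  # (16,)
```

Only `W_down` and `W_up` would be updated during fine-tuning, which is why adapters cost a fraction of full retraining while adding only a small number of parameters.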

The research team upgraded LLMs including Meta’s Llama and Microsoft’s Phi as part of their study, using specific math queries as samples, and found that negative flips occurred as much as 60% of the time. By incorporating the MUSCLE strategy, the team wasn’t able to fully eliminate negative flips, but they did manage to reduce their occurrence by as much as 40% compared to the control.

Andrew Tarantola
Former Digital Trends Contributor