
Microsoft hits another milestone in speech-recognition software accuracy

If you’re fed up with chatbots mishearing you, Microsoft is making machine ears a little more attentive. Researchers from the tech giant have achieved an impressively low error rate for speech-recognition software — just 6.3 percent, according to a paper published last week. The company hopes this milestone will help refine and personalize its AI assistant, Cortana, and features like Skype Translator.

The new error rate of Microsoft’s conversational speech-recognition system is the lowest in the industry, according to Xuedong Huang, Microsoft’s chief speech scientist. IBM, meanwhile, recently announced an error rate of 6.6 percent, improving on the 6.9 percent rate it reported in April and the 8 percent milestone the company achieved last year. Two decades ago, the lowest error rate of a published system was more than 43 percent, Microsoft notes in a blog post.
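For context, accuracy figures like these are typically reported as word error rate: the number of word substitutions, deletions, and insertions needed to turn a system's transcript into a human reference transcript, divided by the number of words in the reference. The short Python sketch below illustrates that calculation. It is an illustrative example only, not the scoring code behind Microsoft's or IBM's results, and the function name and sample sentences are invented for this article.

# Illustrative word error rate (WER) calculation: a minimal sketch,
# not the evaluation code used in Microsoft's or IBM's benchmarks.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words:
    # d[i][j] = edits needed to turn hyp[:j] into ref[:i].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match or substitution
    return d[len(ref)][len(hyp)] / len(ref)

ref = "please call the office when you arrive at the airport"
hyp = "please call the offers when you arrive at the airport"
print(word_error_rate(ref, hyp))  # one substitution in ten words: 0.1, or 10 percent

On that scale, a 6.3 percent error rate means the system gets roughly one word in sixteen wrong on the benchmark conversations.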


In artificial intelligence development, researchers often model machines after humans by equipping systems with the ability to speak, see, and hear. Although Microsoft’s result is only 0.3 percentage points better than IBM’s, incremental advances like these bring machines closer to human-like capabilities. In speech recognition, the human error rate is around 4 percent, according to IBM.

“This new milestone benefited from a wide range of new technologies developed by the AI community from many different organizations over the past 20 years,” Microsoft’s Huang said.

These technologies include biologically inspired systems called neural networks, a training technique known as deep learning, and the adoption of graphics processing units (GPUs) to run the underlying algorithms. Over the past two years, neural networks and deep learning have enabled AI researchers to build and train systems for advanced speech recognition, image recognition, and natural language processing. Just last year, Microsoft created image-recognition software that outperformed humans.
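To make the jargon concrete, the sketch below trains a deliberately tiny feed-forward neural network with gradient descent on made-up data. It is only an illustration of what “training a neural network” means in practice; the layer sizes, learning rate, and toy labels are arbitrary choices for this example and bear no relation to the deep acoustic and language models Microsoft actually uses.

import numpy as np

# A deliberately tiny feed-forward network trained with gradient descent,
# shown only to illustrate the technique, not Microsoft's speech models.
rng = np.random.default_rng(0)

# Toy data: 100 random 8-dimensional "feature frames" and binary labels.
X = rng.normal(size=(100, 8))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# One hidden layer of 16 units feeding a sigmoid output.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(500):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Gradients of the mean binary cross-entropy loss (backpropagation).
    grad_logits = (p - y) / len(X)
    grad_W2 = h.T @ grad_logits
    grad_b2 = grad_logits.sum(axis=0)
    grad_h = (grad_logits @ W2.T) * (1 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

accuracy = ((p > 0.5) == y).mean()
print(f"toy training accuracy: {accuracy:.2f}")

Real speech systems stack many more layers and train on vastly more data, but the loop is conceptually the same: a forward pass, a measure of error, and repeated gradient updates, run at scale on GPUs.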

Although initially designed for computer graphics, GPUs are now regularly used to run sophisticated algorithms. Using GPUs, Cortana can process up to 10 times more data than it could with previous methods, according to Microsoft.

With steady advances like these, repeating your question to a chatbot may be a thing of the past.

Dyllan Furness
Former Digital Trends Contributor
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…