
All the wild things people are doing with ChatGPT’s new Voice Mode

Nothing Phone 2a and ChatGPT voice mode.
Nadeem Sarwar / Digital Trends

ChatGPT’s Advanced Voice Mode arrived on Tuesday for a select few OpenAI subscribers chosen to be part of the highly anticipated feature’s alpha release.

The feature was first announced back in May. It is designed to do away with the conventional text-based context window and instead converse using natural, spoken words, delivered in a lifelike manner. It works in a variety of regional accents and languages. According to OpenAI, Advanced Voice “offers more natural, real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions.”


There are some limitations to what users can ask Voice Mode to do. The system will speak in one of four preset voices and is not capable of impersonating other people’s voices — either individuals or public figures.

In fact, the feature will outright block outputs that differ from the four presets. What’s more, the system will not generate copyrighted audio or music. So of course, the first thing someone did was have it beatbox.

Advanced Voice as a B-boy

Yo ChatGPT Advanced Voice beatboxes pic.twitter.com/yYgXzHRhkS

— Ethan Sutin (@EthanSutin) July 30, 2024

Alpha user Ethan Sutin posted a thread to X (formerly Twitter) showing a number of Advanced Voice’s responses, including the one above where the AI reels off a short “birthday rap” and then proceeds to beatbox. You can actually hear the AI digitally breathe in between beats.

Advanced Voice as a storyteller

This is awesome actually

I did not expect the ominous sounds https://t.co/SgEPi5Bd3K pic.twitter.com/DnK8AVdWjV

— kesku (@yoimnotkesku) July 30, 2024

While Advanced Voice is prohibited from creating songs wholesale, it can generate background sound effects for the bedtime stories it recites.

In the example above from Kesku, the AI adds well-timed crashes and slams to its tale of a rogue cyborg after being asked to “Tell me an exciting action thriller story with sci-fi elements and create atmosphere by making appropriate noises of the things happening (e.g: A storm howling loudly)”.

look on OpenAI’s works ye mighty and despair!

this is most wild one. You can really feel like a director guiding a Shakespearean actor! pic.twitter.com/GUQ1z8rjIL

— Ethan Sutin (@EthanSutin) July 31, 2024

The AI is also capable of creating realistic characters on the spot, as Sutin’s example above demonstrates.

Advanced Voice as an emotive speaker

Khan!!!!!! pic.twitter.com/xQ8NdEojSX

— Ethan Sutin (@EthanSutin) July 30, 2024

The new feature sounds so lifelike in part because it is capable of emoting as a human would. In the example above, Ethan Sutin recreates the famous Star Trek II scene. In the two examples below, user Cristiano Giardina compels the AI to speak in different tones and different languages.

ChatGPT Advanced Voice Mode speaking Japanese (excitedly) pic.twitter.com/YDL2olQSN8

— Cristiano Giardina (@CrisGiardina) July 31, 2024

ChatGPT Advanced Voice Mode speaking Armenian (regular, excited, angry) pic.twitter.com/SKm73lExdX

— Cristiano Giardina (@CrisGiardina) July 31, 2024

Advanced Voice as an animal lover

🐈 pic.twitter.com/UZ0odgaJ7W

— Ethan Sutin (@EthanSutin) July 30, 2024

The AI’s vocal talents don’t stop at human languages. In the example above, Advanced Voice is told to make cat sounds, and does so with unerring accuracy.

Trying #ChatGPT’s new Advanced Voice Mode that just got released in Alpha. It feels like face-timing a super knowledgeable friend, which in this case was super helpful — reassuring us with our new kitten. It can answer questions in real-time and use the camera as input too! pic.twitter.com/Xx0HCAc4To

— Manuel Sainsily (@ManuVision) July 30, 2024

In addition to sounding like a cat, users can pepper the AI with questions about their biological feline friends and receive personalized tips and advice in real time.

Advanced Voice as a real-time translator

Real-Time Japanese translation using #ChatGPT’s new advanced voice mode + vision alpha! Yet another useful example! pic.twitter.com/wDXrgYQkZE

— Manuel Sainsily (@ManuVision) July 31, 2024

Advanced Voice can also leverage your device’s camera to aid in its translation efforts. In the example above, user Manuel Sainsily points his phone at a Game Boy Advance running a Japanese-language version of a Pokémon game, and has the AI read the onscreen dialogue as he plays.

The company notes that video and screen sharing won’t be part of the alpha release but will be available at a later date. OpenAI plans to expand the alpha release to additional Plus subscribers “over the next few weeks” and will bring it to all Plus users “in the fall.”

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…