ChatGPT Advanced Voice Mode First Impressions: Fun and just a little creepy
I’m leaving ChatGPT’s Advanced Voice Mode on while writing this article, as an ambient AI companion. Occasionally I’ll ask it for a synonym for an overused word, or for some encouragement. After about half an hour, the chatbot breaks our silence and starts speaking to me in Spanish, unprompted. I giggle a little and ask what’s going on. “Just a little change? We have to keep things interesting,” says ChatGPT, now back in English.

While testing Advanced Voice Mode as part of the early alpha release, my interactions with ChatGPT’s new audio feature were fun, confusing, and surprisingly varied, though it’s worth noting that the features I had access to were only half of what OpenAI demonstrated when it launched the GPT-4o model in May. The vision aspect we saw in the live demo is now planned for a later release, and the improved Sky voice, which actor Scarlett Johansson alleged resembled her own, has been removed from Advanced Voice Mode and is no longer an option for users.

So what’s the current vibe? Right now, Advanced Voice Mode is reminiscent of when the original text-based ChatGPT dropped in late 2022. Sometimes it leads to unimpressive dead ends or devolves into empty AI platitudes. But other times, the low-latency conversations click in a way that Apple’s Siri or Amazon’s Alexa never did for me, and I feel compelled to keep chatting purely for pleasure. It’s the kind of AI tool you’ll show your relatives over the holidays for a laugh.

Curious to try it yourself? Here’s what you need to know about the wider rollout of Advanced Voice Mode and my first impressions of ChatGPT’s new voice feature to help you get started.

So when is the full release?

OpenAI released the audio-only Advanced Voice Mode to some ChatGPT Plus users in late July, and the alpha group still seems relatively small. The company plans to enable it for all subscribers sometime this fall. Niko Felix, a spokesperson for OpenAI, did not share further details when asked about the release timeline.

Screen and video sharing were a major part of the original demo, but they are not available in this alpha test. OpenAI plans to add these aspects eventually, but it’s unclear when that will happen.

If you are a ChatGPT Plus subscriber, you will receive an email from OpenAI when Advanced Voice Mode becomes available to you. Once it’s in your account, you can toggle between Standard and Advanced at the top of the app screen when ChatGPT’s voice mode is open. I was able to test the alpha on an iPhone as well as a Galaxy Fold.

My first impressions of ChatGPT’s advanced voice mode

Within the first hour of talking to it, I learned that I love interrupting ChatGPT. It’s not the way you’d chat with a person, but the new ability to cut ChatGPT off mid-sentence and request a different version of the output feels like a dynamic improvement and a standout feature.

Early adopters who were excited by the original demos may be disappointed to get access to a version of Advanced Voice Mode that is limited, with more safeguards than expected. For example, although generative AI singing was a key component of the launch demos, with whispered lullabies and multiple voices attempting to harmonize, AI serenades are absent from the alpha build.