OpenAI warns that users can become emotionally addicted to its voice mode

In late July, OpenAI began rolling out an eerily humanlike voice interface for ChatGPT. In a safety analysis released today, the company acknowledges that this anthropomorphic voice may lead some users to become emotionally attached to their chatbot.

The warnings are included in a “system card” for GPT-4o, a technical document that lays out what the company believes are the risks associated with the model, along with details about safety testing and the mitigation efforts the company is taking to reduce potential harm.

OpenAI has come under scrutiny in recent months after a number of employees working on the long-term risks of AI left the company. Some subsequently accused OpenAI of taking unnecessary risks and silencing dissenters in its race to commercialize AI. Revealing more details about OpenAI’s safety regime could help blunt that criticism and reassure the public that the company takes the issue seriously.

The risks explored in the new system card are wide-ranging and include the potential for GPT-4o to amplify societal biases, spread misinformation, and aid in the development of chemical or biological weapons. It also discloses details of testing designed to ensure that the AI models don’t try to break free of their controls, deceive humans, or hatch catastrophic plans.

Some outside experts praise OpenAI for its transparency but say the company could go further.

Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, a company that hosts AI tools, notes that OpenAI’s system card for GPT-4o does not include extensive details about the model’s training data or who owns that data. “The issue of consent in creating such a large dataset spanning multiple modalities, including text, image and speech, needs to be addressed,” Kaffee says.

Others note that the risks may change once the tools are used in the wild. “Their internal review should be only the first step toward ensuring the safety of AI,” said Neil Thompson, an MIT professor who studies AI risk assessments. “Many risks only become apparent when AI is used in the real world. It is important that these other risks are cataloged and assessed as new models emerge.”

The new system card highlights how quickly AI risks are evolving with the development of powerful new features such as OpenAI’s voice interface. In May, when the company unveiled its voice mode, which can respond rapidly and handle interruptions in a natural back-and-forth exchange, many users noted that it seemed overly flirtatious in demos. The company was later criticized by actress Scarlett Johansson, who accused it of mimicking her speaking style.

A section of the system card titled “Anthropomorphization and Emotional Reliance” explores the problems that arise when users perceive AI in human terms, something apparently exacerbated by the humanlike voice mode. During red teaming, or stress testing, of GPT-4o, for instance, OpenAI researchers noticed users speaking in ways that conveyed a sense of emotional connection to the model, using language such as “This is our last day together.”
