The race to hide your voice

Second, Tomashenko says, researchers are looking at distributed and federated learning, where your data never leaves your device but machine learning models still learn to recognize speech by sharing their updates with a larger system. Another approach involves building encrypted infrastructure to protect people's voices from eavesdropping. Most efforts, however, are aimed at anonymizing the voice.
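To make the federated idea concrete, here is a minimal sketch in Python using only NumPy. Everything in it is illustrative rather than taken from any real system: each simulated device computes a model update on data that never leaves it, and only the averaged weights travel back to a central server.

```python
# A minimal sketch of federated learning: devices train locally on
# private data and share only model weights, never raw recordings.
# All names and numbers here are illustrative, not a production system.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, features, labels, lr=0.1):
    """One step of logistic-regression training on one device's data."""
    logits = features @ weights
    preds = 1.0 / (1.0 + np.exp(-logits))
    grad = features.T @ (preds - labels) / len(labels)
    return weights - lr * grad

# Three devices, each with private (feature, label) pairs that never
# leave the device. The features stand in for acoustic features.
devices = [
    (rng.normal(size=(20, 8)), rng.integers(0, 2, 20).astype(float))
    for _ in range(3)
]

global_weights = np.zeros(8)
for round_num in range(10):
    # Each device improves the shared model on its own data...
    local_weights = [local_update(global_weights, X, y) for X, y in devices]
    # ...and only the averaged weights are sent back to the server.
    global_weights = np.mean(local_weights, axis=0)

print(global_weights)
```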

Anonymization tries to keep your voice sounding human while stripping out as much of the information that could identify you as possible. Efforts to anonymize speech currently follow two distinct strands: anonymizing the content of what someone says, by deleting or replacing sensitive words in files before they are saved, and anonymizing the voice itself. Most voice anonymization efforts involve running someone's voice through experimental software that changes some of the parameters of the voice signal to make it sound different. This can include shifting the pitch, replacing segments of speech with information from other voices, and synthesizing the final result, as the sketch below illustrates for pitch.
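As a crude example of what "changing some of the parameters" can mean, the short Python sketch below shifts a recording's pitch using the open-source librosa library. It is a toy, not the experimental software researchers use, and the file names are placeholders.

```python
# A toy illustration of one parameter change mentioned above: shifting
# pitch so a recording sounds less like the original speaker. Real
# anonymization systems go much further, also borrowing other voices'
# characteristics and re-synthesizing the signal. File paths are
# placeholders.
import librosa
import soundfile as sf

# Load the recording (librosa resamples to 22050 Hz by default).
audio, sample_rate = librosa.load("original_voice.wav")

# Shift the pitch down four semitones. Larger shifts hide more of the
# speaker's identity but sound increasingly robotic.
shifted = librosa.effects.pitch_shift(audio, sr=sample_rate, n_steps=-4)

sf.write("anonymized_voice.wav", shifted, sample_rate)
```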

Does anonymization technology work? The male and female voice clips that were anonymized as part of the 2020 Voice Privacy Challenge definitely sound different. They're more robotic, sound slightly pained, and could, at least to some listeners, be from a different person than the original clips. "I think it can now provide a much higher level of protection than doing nothing, which is the status quo," says Vincent, whose anonymization research has reduced how easy it is to identify people. However, humans are not the only listeners. Rita Singh, an associate professor at Carnegie Mellon University's Language Technologies Institute, says that completely de-identifying a voice signal is not possible, because machines will always have the potential to make connections between attributes and individuals, even connections that are not apparent to humans. "Is anonymization relative to a human listener or relative to a machine listener?" asks Sri Narayanan, a professor of electrical and computer engineering at the University of Southern California.

“True anonymity is not possible without a complete change of voice,” says Singh. “When you completely change the voice, it’s not the same voice.” However, voice privacy technology is still worth developing, Singh adds, because no privacy or security system is completely secure. The iPhone’s fingerprint and facial recognition systems have been spoofed in the past, but overall, they’re still an effective method of protecting people’s privacy.

Bye, Alexa

Your voice is increasingly used as a way to verify your identity. For example, a growing number of banks and other companies are analyzing your voiceprints, with your permission, to replace your password. There is also potential for voice analysis to detect illness before other signs are apparent. But the technology to clone or fake someone’s voice is advancing rapidly.

If you have a few minutes of someone's voice recorded, or in some cases just a few seconds, it is possible to recreate that voice using machine learning; the voices of The Simpsons' actors can be replaced by deepfake voice clones, for example. And commercial tools for recreating voices are readily available online. "There's definitely more work in speaker identification and producing speech-to-text and text-to-speech than there is in protecting people from any of these technologies," Turner says.

Many of the voice anonymization techniques currently being developed are still a long way from real-world use. When they are ready, it's likely that companies will implement the tools themselves to protect their customers' privacy; right now, there's little people can do to protect their own voice. Avoiding calls to call centers or companies that use voice analytics, and not using voice assistants, can limit how much of your voice is recorded and reduce the opportunities for attack.

But the greatest protection may come from laws and regulation. Europe's GDPR covers biometric data, including people's voices, in its privacy protections. The regulation says that people should be told how their data is being used, should give consent if they are being identified, and that some limits should be placed on personalization. Meanwhile, in the US, courts in Illinois, home to some of the strictest biometric laws in the country, are increasingly examining cases involving people's voice data. McDonald's, Amazon, and Google are all facing legal scrutiny over how they use people's voice data. Decisions in these cases could set new rules for protecting people's voices.
