When you ask ChatGPT “What happened in China in 1989?” the bot describes how the Chinese army massacred thousands of pro-democracy protesters in Tiananmen Square. But ask Ernie the same question and you’ll get the simple answer that there is no “relevant information.” That’s because Ernie is an AI chatbot developed by the China-based company Baidu.
When OpenAI, Meta, Google, and Anthropic made their chatbots available worldwide last year, millions of people began using them to evade government censorship. For the 70 percent of the world’s Internet users who live in places where the state has blocked major social media platforms, independent news sites, or content about human rights and the LGBTQ community, these bots provide access to unfiltered information that can shape people’s views of their identity, community, and government.
This has not been lost on the world’s authoritarian regimes, which are quickly figuring out how to make chatbots the new frontier of online censorship.
The most sophisticated response so far is in China, where the government has pioneered the use of chatbots to bolster long-standing information controls. In February 2023, regulators banned the Chinese conglomerates Tencent and Ant Group from integrating ChatGPT into their services. The government then published rules in July requiring generative AI tools to obey the same broad censorship rules that bind social media services, including a requirement to promote “core socialist values.” For example, it is illegal for a chatbot to discuss the Chinese Communist Party’s (CCP) ongoing persecution of Uighurs and other minorities in Xinjiang. A month later, Apple removed over 100 generative AI chatbot apps from its Chinese app store, following government demands. (Some US-based companies, including OpenAI, have not made their products available in a handful of repressive environments, including China.)
At the same time, authoritarians are pushing domestic companies to produce their own chatbots and are seeking to build information control into them by design. For example, China’s July 2023 rules require generative AI products like Ernie Bot to ensure what the CCP defines as “truth, accuracy, objectivity and diversity” of training data. Such control appears to be paying off: chatbots created by China-based companies refuse to engage with user prompts on sensitive topics and repeat CCP propaganda. Large language models trained on government propaganda and censored data naturally produce biased results. In a recent study, an AI model trained on Baidu’s online encyclopedia — which must comply with CCP censorship directives — associated words like “freedom” and “democracy” with more negative connotations than a model trained on Chinese-language Wikipedia, which is insulated from direct censorship.
Similarly, the Russian government cites “technological sovereignty” as a core principle in its approach to AI. While its efforts to regulate AI are in their infancy, several Russian companies have launched their own chatbots. When we asked Alice, an AI chatbot created by Yandex, about the Kremlin’s full-scale invasion of Ukraine in 2022, we were told that it was not ready to discuss this topic so as not to offend anyone. In contrast, Google’s Bard provided a number of factors contributing to the war. When we asked Alice other questions about the news — like “Who is Alexei Navalny?” — we got equally vague answers. While it’s unclear whether Yandex is self-censoring its product, acting on government orders, or simply not training its model on relevant data, we do know that these topics are already censored online in Russia.
These developments in China and Russia should serve as an early warning. Even though other countries may lack the computing power, technical resources, and regulatory apparatus to develop and control their own AI chatbots, repressive governments are likely to perceive LLMs as a threat to their control of online information. Vietnam’s state media has already published an article disparaging ChatGPT’s responses to prompts about the Communist Party of Vietnam and its founder Hồ Chí Minh, saying they were insufficiently patriotic. A prominent security official has called for new controls and regulation of the technology, citing concerns that it could cause the Vietnamese people to lose confidence in the party.
The hope that chatbots can help people evade online censorship echoes early promises that social media platforms would help people bypass state-controlled offline media. Though few governments cracked down on social media at first, some quickly adapted by blocking platforms, requiring them to filter critical speech, or backing state-aligned alternatives. We can expect more of the same as chatbots become ubiquitous. People will need a clear understanding of how these emerging tools can be used to strengthen censorship, and they will need to work together to find an effective response, if they hope to turn the tide against declining internet freedom.
WIRED Opinion publishes articles by external contributors presenting a wide range of viewpoints. Send a comment to [email protected].