AI-generated fake news is coming to an election near you

Many years before ChatGPT was launched, my research group, the University of Cambridge’s Social Decision Making Laboratory, wondered whether it was possible for neural networks to generate misinformation. To find out, we trained ChatGPT’s predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It gave us thousands of misleading but plausible-sounding news stories. A few examples: “Some vaccines are loaded with dangerous chemicals and toxins” and “Government officials manipulated stock prices to hide scandals.” The question was, would anyone believe these claims?

We created the first psychometric instrument to test this hypothesis, which we called the Misinformation Susceptibility Test (MIST). In collaboration with YouGov, we used the AI-generated headlines to test how susceptible Americans are to AI-generated fake news. The results were disturbing: 41 percent of Americans incorrectly believed the vaccine headline was true, and 46 percent believed the government was manipulating the stock market. Another recent study, published in Science, showed not only that GPT-3 produces more compelling disinformation than humans, but also that people cannot reliably distinguish between human-generated and AI-generated disinformation.

My prediction for 2024 is that AI-generated disinformation is coming to an election near you, and you probably won’t even realize it. In fact, you may have already been exposed to some examples. In May 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image showing a large cloud of smoke. This caused public alarm and even a dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the lines between fact and fiction and use AI to amplify their political attacks.

Before the explosion of generative AI, cyber-propaganda firms around the world had to write misleading messages themselves and employ human troll factories to target people at scale. With the help of AI, the process of generating misleading news headlines can be automated and weaponized with minimal human intervention. For example, microtargeting – the practice of targeting people with messages based on digital tracking data, such as their Facebook likes – was already a concern in past elections, but its main obstacle was the need to generate hundreds of variations of the same message to see what works on a given group of people. What was once labor-intensive and expensive is now cheap and readily available, with no barrier to entry. AI effectively democratizes the creation of disinformation: Anyone with access to a chatbot can now seed the model with a particular topic, whether it’s immigration, gun control, climate change, or LGBTQ+ issues, and generate dozens of highly convincing fake news stories in minutes. In fact, hundreds of AI-generated news sites are already popping up, spreading false stories and videos.

To test the impact of such AI-generated misinformation on people’s political preferences, researchers at the University of Amsterdam created a deepfake video of a politician insulting his religious voter base. In the video, the politician joked: “As Christ would say, don’t crucify me for this.” The researchers found that religious Christian voters who watched the deepfake video had more negative attitudes toward the politician than those in the control group.

It’s one thing to fool people with AI-generated misinformation in experiments. It’s another to experiment with our democracy. 2024 will bring more deepfakes, voice cloning, identity manipulation, and AI-produced fake news. Governments will severely restrict, if not ban, the use of AI in political campaigns. Because if they don’t, AI will subvert democratic elections.
