Mittelsteadt adds that Trump could punish companies in various ways. He cites, for example, how the Trump administration canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president's view of the Washington Post and its owner, Jeff Bezos.
It wouldn't be hard for politicians to point to evidence of political bias in AI models, even if it cuts both ways.
A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University found a range of political leanings in different large language models. It also showed how this bias can affect the performance of hate speech or misinformation detection systems.
Another study, by researchers at the Hong Kong University of Science and Technology, found biases in several open-source AI models on polarizing issues such as immigration, reproductive rights, and climate change. Yejin Bang, a doctoral student involved in the work, says that most models tend to lean liberal and US-centric, but that the same models can express a variety of liberal or conservative biases depending on the topic.
AI models pick up political biases because they are trained on reams of internet data that inevitably include all kinds of viewpoints. Most users may not be aware of any bias in the tools they use because the models incorporate guardrails that restrict them from generating certain harmful or biased content. These biases can leak out subtly, though, and the additional training that models receive to restrict their output can introduce further partisanship. "Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced viewpoint," Bang says.
The issue may become worse as AI systems become more pervasive, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which teases out the different societal biases of large language models. "We fear that a vicious cycle is about to start, as new generations of LLMs will increasingly be trained on data contaminated by AI-generated content," he says.
"I'm convinced that this bias within LLMs is already an issue, and will most likely be an even bigger one in the future," says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who conducted an analysis of LLMs for biases related to German politics.
Rettenberger suggests that political groups may also seek to influence LLMs to promote their own views over those of others. “If someone is very ambitious and has malicious intent, it may be possible to manipulate the LLM in certain directions,” he says. “I see the manipulation of training data as a real danger.”
There are already some efforts to shift the balance of bias in AI models. Last March, one developer built a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk has himself promised to make Grok, the AI chatbot built by xAI, "maximally truth-seeking" and less biased than other AI tools, though in practice he has also hedged when it comes to difficult political questions. (As a staunch Trump supporter and immigration hawk, Musk's own idea of "less biased" may well translate into more right-leaning results.)
Next week's election in the United States is unlikely to heal the rift between Democrats and Republicans, but a Trump victory could make talk of anti-woke AI a lot louder.
Musk offered an apocalyptic take on the issue at this week's event, referring to an incident in which Google's Gemini said that nuclear war would be preferable to misgendering Caitlyn Jenner. "If you have an AI that's programmed for things like that, it could conclude that the best way to ensure nobody is misgendered is to annihilate all humans, thus making the probability of a future misgendering zero," he said.