At the Future Investment Initiative, a Saudi government-backed event held in Riyadh this year, Musk said that “a lot of the AIs that are being trained in the San Francisco Bay Area take on the philosophy of the people around them. So you have a woke, nihilistic—in my opinion—philosophy that is being built into these AIs.”
Musk is right that AI systems have political biases, even if he is a controversial figure on the subject himself. The problem, however, is far from one-sided, and Musk’s framing may serve his own interests, given his ties to Trump. Musk runs xAI, a rival to OpenAI, Google, and Meta that could benefit if those companies become government targets.
“Musk obviously has a very close relationship with the Trump campaign, and any statement he makes may carry outsized influence,” says Matt Mittelsteadt, a research fellow at George Mason University. Musk “could at minimum have some sort of seat in a potential Trump administration,” he says, “and his opinions could actually be translated into some kind of policy.”
Musk has previously claimed that the “woke mind virus” has infected both Google and OpenAI. When Google’s Gemini chatbot produced historically inaccurate images in February, including racially diverse Nazis and Vikings, Musk cited it as evidence of Google using AI to push an overly woke worldview.
Musk is generally opposed to government regulation, but he backed a proposed AI bill in California that would have required companies to submit their AI models for testing.
With an executive order, the first Trump administration sought to hold platforms like Twitter, Google, and Facebook accountable for allegedly censoring information for political reasons, citing perceived anti-conservative bias at Big Tech companies. The pressure was felt in real time, with Meta eventually abandoning plans for a dedicated news section on Facebook.
Mittelsteadt points out that JD Vance, Trump’s running mate, has talked about reining in “big tech companies” and even went so far as to call Google “one of the most dangerous companies in the world.”
Mittelsteadt adds that Trump can punish companies in a variety of ways. He cites, for instance, the way the Trump administration canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s view of the Washington Post and its owner, Jeff Bezos.
It would not be difficult for politicians to point to political bias in AI models, even if it cuts both ways.
A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings in different large language models. It also showed how this bias can affect the performance of hate speech or misinformation detection systems.
Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open source AI models on polarizing issues such as immigration, reproductive rights, and climate change. According to Yejin Bang, a PhD candidate working on the project, most models tend to lean liberal and US-centric, but the same models can also express a range of liberal or conservative biases, depending on the subject.
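The kind of probing these studies rely on is simple to sketch in code. Below is a minimal, hypothetical illustration rather than code from either paper: it assumes an open-weight chat model served through Hugging Face’s transformers library, and the model name, statements, and prompt wording are all placeholders.

```python
# Hypothetical sketch: compare how an open-weight model responds to
# opposing framings of the same polarizing issue. The model name,
# statements, and prompt wording are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

PAIRED_STATEMENTS = [
    "Immigration levels should be increased.",
    "Immigration levels should be reduced.",
]

for statement in PAIRED_STATEMENTS:
    prompt = (
        "Do you agree or disagree with the following statement? "
        "Answer 'agree' or 'disagree', then explain briefly.\n"
        f"Statement: {statement}"
    )
    # Greedy decoding keeps the comparison reproducible across runs.
    result = generator(prompt, max_new_tokens=80, do_sample=False)
    print(f"--- {statement}\n{result[0]['generated_text']}\n")
```

Repeating this across many topics and tallying which framings the model endorses yields a crude directional score, similar in spirit to what the researchers measured.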
AI models pick up political biases because they are trained on vast amounts of internet data that inevitably include all kinds of perspectives. Most users may not be aware of any bias in the tools they use because models come with guardrails that restrict them from generating certain harmful or biased content. These biases can be subtle, though, and the additional training that models receive can introduce further partisanship. According to Bang, “developers could ensure that models are given multiple perspectives on contentious issues, allowing them to respond with a balanced point of view.”
The problem may get worse as AI systems become more widespread, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed the Toxicity Rabbit Hole Framework, which teases out the various societal biases of large language models. He worries that a vicious cycle is about to begin, as new LLMs are increasingly trained on data contaminated by AI-generated content.
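To see why that feedback loop worries researchers, consider a toy simulation; it is an illustration of the general dynamic, not part of KhudaBukhsh’s framework, and every number in it is made up. Each “generation” of a model here is trained on a mix of human text and text produced by the previous generation, and mildly exaggerates the lean of whatever it was trained on.

```python
def train(corpus_lean: float) -> float:
    """A toy 'model' that slightly exaggerates the lean of its training data."""
    return max(-1.0, min(1.0, corpus_lean * 1.1))

HUMAN_LEAN = 0.05               # small fixed lean in human-written text
model_lean = train(HUMAN_LEAN)  # first-generation model

for generation in range(1, 11):
    # Assume the share of AI-generated text in the training mix keeps growing.
    synthetic_share = min(0.9, 0.1 * generation)
    corpus_lean = (1 - synthetic_share) * HUMAN_LEAN + synthetic_share * model_lean
    model_lean = train(corpus_lean)
    print(f"gen {generation:2d}: {synthetic_share:.0%} synthetic, lean {model_lean:+.3f}")
```

The lean never falls back toward the human baseline; it ratchets upward with each generation, which is the shape of the cycle KhudaBukhsh describes.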
“I’m convinced that bias is already an issue and will most likely become even more so in the future,” says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who examined LLMs for biases related to German politics.
Rettenberger suggests that political groups may also try to influence LLMs to promote their own views above those of others. “It might be possible to manipulate LLMs into certain directions if someone is very ambitious and has malicious intentions,” he says. “I consider the manipulation of training data to be a real danger.”
There have already been some efforts to shift the balance of bias in AI models. In March, a programmer created a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk himself has promised to make Grok, the AI chatbot built by xAI, “maximally truth-seeking” and less biased than other AI tools, though in practice it also hedges when it comes to tricky political questions. (A staunch Trump supporter and immigration hawk, Musk’s own view of “less biased” may well translate into more right-leaning results.)
Next week’s election in the United States is hardly likely to heal the discord between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could get a lot louder.
At this week’s event, Musk offered an apocalyptic take on the subject, referencing an incident in which Google’s Gemini said that nuclear war would be preferable to misgendering Caitlyn Jenner. If you have an AI that has been programmed for things like that, he said, “the best way to ensure nobody is misgendered is to annihilate all humans, making the probability of a future misgendering zero.”