Unmasking Political Bias in AI Chatbots

A recently published study has revealed evidence of political bias in a widely used generative AI chatbot, signaling a growing concern in the technology sector. The study, a collaboration between academics from the UK and Brazil, subjected ChatGPT, a popular large language model (LLM), to meticulous analysis and discovered a clear tilt towards the left of the political spectrum.

The results were drawn from a set of carefully curated questions, some of which directed the chatbot to mimic the responses of individuals with particular political affiliations. A control set of queries requested no such persona. Comparing the two, the researchers concluded that ChatGPT has a noticeable lean towards Democrats in the US, Lula in Brazil, and the Labour Party in the UK.

The abstract from the study states, “These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

The project's lead researcher, Dr. Fabio Motoki of Norwich Business School, highlights the importance of neutrality in AI-powered systems, stating, “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible.” The political bias found therefore presents a potential risk to political and electoral processes, given its ability to influence user views.

Mitigating these issues effectively will require scrutiny and regulation of these rapidly evolving technologies. This sentiment was echoed by Dr. Pinho Neto, co-author of the study, who emphasized the importance of promoting transparency, accountability, and public trust in the technology.

Closely following this revelation was another study, conducted by researchers in the US and China, which found similar biases across multiple variations of GPT models. Notably, neither study suggests that OpenAI, the company behind ChatGPT, is willfully manipulating the model; rather, the data used to train the underlying large language model may carry an inherent bias, intended or not.

Given the increasing role of the internet, and social media in particular, in shaping electoral outcomes, these findings carry a sense of urgency, especially with general elections upcoming in the US and the UK. Political figures worldwide have been grappling to gain a foothold on these influential platforms since the shocking electoral outcomes of 2016.

As internet users increasingly seek information and advice through generative AI chatbots, it is crucial that the developers behind these models strive for neutrality and make the necessary adjustments. Prudent scrutiny of this field should be embraced to ensure that future electoral events maintain their integrity.