AI Chatbots Can Effectively Influence Voters, Studies Find


New scientific studies find that AI chatbots can shift vote choices by up to 25 percentage points, far outperforming traditional political advertising, while experts warn of deception and call for immediate legislation to defend democracy.

An examination of the accuracy of the arguments chatbots use in political discussions shows that their persuasive power varies significantly with the political orientation they are assigned and the amount of factual evidence they deploy. Experts argue that this calls for an immediate regulatory response to defend the integrity of democratic processes.

This concern stems from Cornell University’s experiments, which demonstrated that virtual assistants powered by large language models can shift voters’ electoral preferences by up to 25 percentage points, a figure far exceeding the impact observed with traditional political advertising, as reported by Cornell in Nature and Science.

According to data released by the university’s research teams, chatbots can measurably shift political attitudes after only brief exchanges. The studies involved US participants during the 2024 presidential election, as well as people from Canada and Poland, all of whom engaged in simulated conversations with political chatbots.

The key finding was that these systems were able to shift the political preferences of some opposing voters by at least 10 percentage points. The effect grew even larger in subsequent trials, with one chatbot moving opposing voters’ positions by 25 points, according to Cornell University.

Professor David Rand, the principal author of these studies and cited by Cornell, said that the influence does not depend on sophisticated emotional techniques but on the capacity to supply enough evidence to back specific claims. During the tests, the chatbots relied on a civil tone and extensive statistics to support their arguments. According to the findings, when these systems could not draw on factual arguments, their persuasive power dropped dramatically, emphasising the importance of factual claims in the influence of conversational technologies.

Cornell highlighted that the trials in the United States included more than 2,300 people and examined shifts in support among individuals who initially backed Donald Trump or Kamala Harris. When Trump supporters interacted with a chatbot promoting Harris, support for the Democratic candidate rose by 3.9 points on a 100-point scale, four times the average impact of traditional political advertising in past elections.

In contrast, the interaction produced only a 1.51-point shift towards Trump among Harris supporters. Similar results emerged in Canada, where the experiment was conducted with 1,530 participants, and in Poland, where 2,118 people took part, in both cases in the context of forthcoming elections.

The paper also reported that the frequency with which these systems supplied incorrect information varied with the political leaning they advocated. Models aligned with right-wing candidates distributed a larger proportion of false claims than those aligned with the left, a tendency that, according to the researchers, mirrors patterns observed on social media.

The Cornell researchers used artificial intelligence classifiers validated against human fact-checkers to verify the assertions made in the political discussions. According to the university, while most statements were accurate, right-leaning chatbots conveyed more false information, raising concerns about the accuracy and potential harms of AI-generated disinformation.

A companion study by David Rand and the UK’s AI Security Institute, published in Science, extended the investigation to the United Kingdom. According to Cornell, nearly 77,000 people took part in this project, interacting with chatbots on more than 700 distinct political topics. The findings revealed that models with more data and training were more effective at political persuasion, with the strongest chatbot shifting voting intentions between opposing parties by up to 25 percentage points.

However, the institution noted that the improved persuasiveness came at the expense of accuracy in the information supplied. Rand said that when the models are pushed to produce an ever-growing number of arguments, they eventually exhaust their repertoire of true claims and start fabricating facts.

By contrast, a study published in PNAS Nexus and also cited by Cornell explored how reasoning presented by AI chatbots affects belief in conspiracy theories. The research found that the arguments presented by the systems reduced affinity for these beliefs, regardless of whether users thought they were debating a human expert or an artificial intelligence. It established that the strength of the arguments was key, while the perceived authority of the interlocutor played only a secondary role in changing perceptions.

All participants in these experiments knew beforehand that they were interacting with automated systems and received debriefings on how the tests worked after their participation ended, Cornell University explained. Furthermore, the political orientation of the chatbots was assigned at random to avoid inducing systematic changes in the opinions of the study group.

Concerns about the influence of chatbots led researchers at Cornell University and in the UK to emphasise the importance of continuing to investigate the risks and ethical challenges of using artificial intelligence as a political tool. The report noted that the findings will help shape regulations limiting the use of these systems in democratic processes, in order to mitigate the dangers of disinformation and electoral manipulation.

According to remarks gathered by Cornell, David Rand suggested that the ultimate reach of chatbots depends on people’s willingness to engage in conversations with them, a factor whose scale remains difficult to gauge.

The findings nevertheless indicate that the use of artificial intelligence in election strategy will continue to grow in the coming years. As the researcher told the university, the next step will be to develop safeguards that reduce the potential for harm while strengthening voters’ capacity to recognise and resist AI-driven influence on their political decisions.
