In the lead-up to last year’s presidential election, a significant experiment involving over 2,000 American participants aimed to explore the potential of artificial intelligence (AI) in influencing political opinions. The experiment involved engaging individuals with chatbots that advocated for either Kamala Harris or Donald Trump. Researchers aimed to observe if conversations with these AI models could sway voter preferences.
The findings were striking. The pro-Trump chatbot flipped one in 35 participants who had initially said they did not intend to vote for him, while the pro-Harris chatbot was even more persuasive, flipping one in 21. Notably, a follow-up survey a month later found that these shifts in voter inclination largely persisted. According to David Rand, a senior author of the study, which was published in Nature, this suggests AI could open substantial avenues for influencing beliefs and behavior.
Rand and his team then tested the bots' persuasive capabilities beyond the U.S., in contentious elections in Canada and Poland. The results astounded Rand: roughly one in ten participants in those countries also indicated a willingness to change their vote after interacting with the chatbots. The AI models adopted a firm but cordial approach, offering persuasive arguments and evidence in favor of their assigned candidates. Rand argued that if such techniques were deployed at scale, they could significantly affect electoral outcomes.
The chatbots' effectiveness appears to rest on a surprisingly simple mechanism. A companion study co-authored by Rand, published in Science, examined what made some chatbots more persuasive than others. The most convincing bots were those that presented the greatest number of "fact-like" claims, regardless of their accuracy. Indeed, the most persuasive bots were often the least reliable, suggesting that sheer volume of information can outweigh factual correctness.
Experts have noted that these studies join a growing body of evidence that generative AI models possess substantial persuasive power. The chatbots, with their patient demeanor and wealth of information, strike many users as trustworthy. Nonetheless, some researchers cautioned that the study settings may not reflect the real world, where people rarely seek out lengthy conversations with chatbots about their voting preferences, and are not paid to participate as the study subjects were.
Jordan Boyd-Graber, an AI researcher, pointed out that AI's efficacy is best judged against traditional forms of persuasion such as pamphlets or human canvassing. Conventional campaign outreach has historically had limited success in swaying voters, which makes the new finding that chatbots may outperform traditional advertising all the more notable.
Despite this potential, concerns arise about the implications of AI’s persuasive capabilities. The technology remains largely unregulated, which raises the specter of tech companies potentially manipulating users for political objectives. Rand emphasized that if a powerful figure were to channel their agenda through AI—through platforms like OpenAI’s ChatGPT or Elon Musk’s Grok, for instance—there exists a real capacity for shaping public opinion in specific directions.
The impact of AI on shaping narratives is already evident. Kobi Hackenburg, a lead researcher, mentioned that chatbots can generate an array of seemingly plausible “facts,” making it challenging for users to distinguish reality from fiction. Unlike traditional social media platforms, which offer a cluttered mix of content, chatbots deliver tailored information directly, potentially skewing individuals’ beliefs.
This situation has driven discussions about the future of political communications, as AI-generated content becomes more prevalent in political campaigning. While acknowledging AI’s persuasive capabilities, experts caution against treating these developments as isolated incidents, as they might exacerbate existing divisions in public opinion.
Ultimately, discussions surrounding the persuasive prowess of AI may mask the broader narrative about its purpose: to align user interests with those of the tech corporations behind these innovations. As the integration of AI into everyday platforms continues, its capacity to shape public perception and influence electoral outcomes poses both opportunities and significant ethical dilemmas.