
The Impact of AI-Generated Fake News in Past Elections

The misuse of AI-generated content has extended far beyond entertainment, emerging as a powerful tool for electoral manipulation.

Aubrey Rademacher

As artificial intelligence (AI) technology advances, so does its potential to influence elections by spreading disinformation. The growing presence of AI-generated fake news and deepfakes is a significant concern for democratic processes, as these technologies can manipulate voters, erode trust in institutions, and ultimately alter election outcomes.

The Quantum Leap in Election Disinformation

The use of AI to generate disinformation has evolved rapidly in recent years, making it far easier for bad actors to create convincing fake content. Previously, fabricating false images, videos, or audio required a high level of technical skill; today, generative AI tools from companies such as Google and OpenAI allow anyone to produce sophisticated deepfakes with minimal effort.

These AI-generated fakes are not limited to entertainment but are now being weaponized to sway voters by disseminating false narratives about political candidates, policies, and even election logistics. In the 2024 U.S. presidential race, for instance, AI was used to create a robocall mimicking President Biden’s voice, urging voters to skip the New Hampshire primary. The incident highlighted how easily AI can be misused to suppress voter turnout. Similarly, in Moldova and Slovakia, AI-generated deepfakes have been used to distort voters’ perceptions of political candidates, underscoring the global reach of the threat.

The threat of AI-generated election disinformation is not confined to the U.S. and Europe. In countries like Mexico and India, AI-generated audio and video content has already appeared, often targeting key political figures to discredit them or confuse voters.

The rapid development of these AI tools has made it increasingly difficult for the public to distinguish between real and fake content. As Srishti Jaswal, a journalist in India, noted, AI has become a tool for targeting marginalized groups, especially women, adding an additional layer of social harm beyond its political misuse.

Platforms like YouTube, Meta, and TikTok have begun implementing measures requiring users to disclose when content is AI-generated. However, the effectiveness of these measures remains limited, and many countries are lagging in legislative responses to the threat. In Europe, the Digital Services Act requires tech companies to assess the risks posed by AI to society, including during elections, but this approach is still in its infancy.

[Image: an AI-generated illustration created by students at USC Price.]

While AI can enable beneficial innovations, it has also produced notable failures and abuses that illustrate its potential for harm. One example is the use of AI by content farms, which publish fake news articles on websites designed to appear legitimate. These sites operate with little to no human oversight, often relying on programmatic advertising for revenue.

A key driver of AI-generated misinformation is the economic model behind it. Because brands inadvertently place ads on untrustworthy websites, those sites are financially incentivized to produce more false content. The result has been a proliferation of misinformation, ranging from fabricated political events to conspiracy theories about global conflicts.

The Road Ahead: What Can Be Done?

As the 2024 elections approach, the pressure is mounting on lawmakers and tech companies to curb the spread of AI-generated misinformation. Some states in the U.S. have introduced bills requiring disclaimers on AI-generated political ads and media, while others are exploring bans on AI deepfakes in the run-up to elections. On the corporate side, Meta and other platforms are collaborating with AI developers to create standards for labeling AI-generated content.

At the international level, the European Union’s approach to regulating AI offers a potential blueprint for addressing the global impact of AI on elections. By requiring independent audits of the risks posed by AI technologies, the EU is laying the groundwork for more comprehensive regulation. However, as Hetrick (2024) notes, the U.S. still lacks specific guidance tailored to election officials on how to manage AI-related disinformation threats.

References

Adami, M. (2024, March 15). How AI-generated disinformation might impact this year’s elections and how journalists should report on it. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/how-ai-generated-disinformation-might-impact-years-elections-and-how-journalists-should-report

Bond, S. (2024, February 8). AI fakes raise election risks as lawmakers and tech companies scramble to catch up. NPR. https://www.npr.org/2024/02/08/1229641751/ai-deepfakes-election-risks-lawmakers-tech-companies-artificial-intelligence

Hetrick, C. (2024, July 2). How to spot AI fake news – and what policymakers can do to help. USC Price. https://priceschool.usc.edu/news/ai-election-disinformation-biden-california-europe/

Swenson, A., & Chan, K. (2024, March 14). Election disinformation takes a big leap with AI being used to deceive worldwide. AP News. https://apnews.com/article/artificial-intelligence-elections-disinformation-chatgpt-bc283e7426402f0b4baa7df280a4c3fd
