AI’s Underwhelming Impact on the 2024 Elections

Early this year, watchdogs and technologists warned that artificial intelligence would sow chaos in the 2024 U.S. elections, spreading misinformation through deepfakes and personalized political advertising campaigns. Those fears spread to the public: More than half of U.S. adults are “extremely or very concerned” about AI’s negative impacts on the election, according to a recent Pew poll.

Yet with Election Day one week away, fears that the race would be derailed or defined by AI appear to have been overblown. Political deepfakes have been shared across social media, but they have been only a small part of larger misinformation campaigns. The U.S. Intelligence Community wrote in September that while foreign actors like Russia were using generative AI to “improve and accelerate” attempts to influence voters, the tools did not “revolutionize such operations.”

Tech insiders acknowledge 2024 was not a breakthrough year for generative AI in politics. “There are a lot of campaigns and organizations using AI in some way or another. But in my view, it did not reach the level of impact that people anticipated or feared,” says Betsy Hoover, the founder of Higher Ground Labs, a venture fund that invests in political technology. 

At the same time, researchers warn that the impact of generative AI on this election cycle is not yet fully understood, especially because of its deployment on private messaging platforms. They also contend that even if AI’s effect on this campaign seems underwhelming, it is likely to balloon in coming elections as the technology improves and its usage grows among the general public and political operatives. “I’m sure in another year or two the AI models will get better,” says Sunny Gandhi, the vice president of political affairs at Encode Justice. “So I’m pretty worried about what it will look like in 2026 and definitely 2028.”

The Rise of Political Deepfakes

Generative AI has already had a clear impact on global politics. In countries across South Asia, candidates used artificial intelligence to flood the public with articles, images, and video deepfakes. In February, an audio deepfake circulated that purported to capture London Mayor Sadiq Khan making inflammatory comments before a major pro-Palestinian march. Khan says the clip inflamed violent clashes between protesters and counter-protesters.

There have been examples in the U.S. too. In February, New Hampshire residents received voicemails from an audio deepfake of Joe Biden, in which the President appeared to discourage them from voting. The FCC promptly banned robocalls containing AI-generated voices, and the Democratic political consultant who created the voicemails was indicted on criminal charges, sending a strong warning to others who might try similar tactics. 

Still, political deepfakes were amplified by prominent politicians, including former President Donald Trump. In August, Trump posted AI-generated images of Taylor Swift appearing to endorse him, as well as of Kamala Harris in communist garb. In September, a video later linked to a Russian disinformation campaign accused Harris of involvement in a hit-and-run accident; it was viewed millions of times on social media.

Russia has been a particular hotbed for malicious uses of AI, with state actors generating text, images, audio, and video that they have deployed in the U.S., often to amplify fears around immigration. It’s unclear whether these campaigns have had much of an impact on voters. The Justice Department said in September that it had disrupted one such campaign, known as Doppelganger. The U.S. Intelligence Community wrote the same month that these foreign actors faced several challenges in spreading their videos, including the need to “overcome restrictions built into many AI tools.”

Independent researchers have also worked to track the spread and impact of AI creations. Early this year, a group of researchers at Purdue created a database of political-deepfake incidents, which has since logged more than 500 of them. Surprisingly, a majority of those videos were not created to deceive people; rather, they are satire, education, or political commentary, says researcher Christina Walker. However, Walker says that these videos’ meanings often change as they spread across different political circles. “One person posts a deepfake and writes, ‘This is a deepfake. I created it to show X, Y and Z.’ Twenty retweets later, someone else is sharing it as if it's real,” Walker says.

Daniel Schiff, another researcher on the project, says many deepfakes are likely designed to reinforce the opinions of people who were already predisposed to believe their messaging. Other studies suggest that most forms of political persuasion have very small effects at best—and that voters actively dislike political messages that are personally tailored to them. That might render moot one of AI’s primary powers: to create targeted messages cheaply. In August, Meta reported that generative AI-driven tactics have provided “only incremental productivity and content-generation gains” to influence campaigns. The company concluded that the tech industry’s strategies to neutralize their spread “appear effective at this time.” 

Other researchers are less confident. Mia Hoffmann, a research fellow at Georgetown’s Center for Security and Emerging Technology, says it’s difficult to ascertain AI’s influence on voters for several reasons. One is that major tech companies have limited the amount of data they share about posts: Twitter ended free access to its API, and Meta recently shut down CrowdTangle on Facebook and Instagram, making it harder for researchers to track hate speech and misinformation across those platforms. “We’re at the mercy of what these companies share with us,” Hoffmann says.

Hoffmann also worries that AI-created misinformation is proliferating on closed messaging platforms like WhatsApp, which are especially popular with diasporic immigrant communities in the U.S. It’s possible that robust AI efforts are being deployed to influence voters in swing states, but that we may not know about their effectiveness until after the election, she adds. “As the electoral importance of these groups has grown, they are increasingly being targeted with tailored influence campaigns that aim to suppress their votes and sway their opinions,” Hoffmann says. “And because of the encryption of the apps, misinformation is more hidden from fact-checking efforts.” 

AI Tools in Political Campaigns

Other political actors are attempting to wield generative AI in more mundane ways. Campaigns can use AI tools to trawl the web to see how a candidate is being perceived in different social and economic circles, conduct opposition research, summarize dozens of news articles, and write social media copy tailored to different audiences. Many campaigns are understaffed, underfunded, and pressed for time; AI, the theory goes, could take over some of the low-level work typically done by interns.
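
By way of illustration, the summarization step is often little more than a scripted call to a general-purpose language model. Below is a minimal sketch assuming an OpenAI-style chat API; the client setup, model name, and prompt are placeholders, not a description of any campaign’s actual tooling.

```python
# Illustrative sketch only: the client, model, and prompt are
# assumptions, not any campaign's actual tooling. It batch-summarizes
# news coverage with a general-purpose chat-completion API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_articles(articles: list[str]) -> list[str]:
    """Return a one-paragraph, neutral summary of each article."""
    summaries = []
    for text in articles:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[
                {"role": "system",
                 "content": "Summarize this news article in one neutral paragraph."},
                {"role": "user", "content": text},
            ],
        )
        summaries.append(response.choices[0].message.content)
    return summaries
```

Swapping the system prompt turns the same loop into a generator of draft social copy for different audiences, which is roughly the productivity pitch these tools make.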

A spokesperson for the Democratic National Committee told TIME that members of the organization were using generative AI to make their work “more efficient while maintaining strong safeguards”—including to help officials draft fundraising emails, write code, and spot unusual patterns of voter removals in public data records. A spokesperson for the Republican National Committee did not respond to a request for comment.  
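
The DNC did not describe how that pattern-spotting works, but at its simplest it is an outlier-detection problem. Here is a minimal sketch of one basic approach, with hypothetical column names: flag counties whose voter-removal rate sits unusually far above the statewide average.

```python
# Illustrative sketch only: the DNC has not published its method, and
# these column names are hypothetical. It flags counties whose
# voter-removal rate is a statistical outlier relative to the state.
import pandas as pd

def flag_unusual_removals(df: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Expects columns: county, registered_voters, voters_removed."""
    df = df.copy()
    df["removal_rate"] = df["voters_removed"] / df["registered_voters"]
    mean, std = df["removal_rate"].mean(), df["removal_rate"].std()
    df["z_score"] = (df["removal_rate"] - mean) / std
    # Counties more than z_threshold standard deviations above the mean
    # removal rate get surfaced for human review.
    return df[df["z_score"] > z_threshold].sort_values("z_score", ascending=False)
```

A cutoff like this is only a first-pass filter; anything it flags would still need to be checked against official records by a person.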

A variety of startups now offer AI tools for political campaigns. They include BattleGroundAI, which the company says can write copy for hundreds of political ads “in minutes,” and Grow Progress, which runs a chatbot that helps campaigns generate and refine persuasion tactics and messages for potential voters. Josh Berezin, a co-founder of Grow Progress, says dozens of campaigns have “experimented” with the chatbot this year to create ads.

But Berezin says adoption of those AI tools has been slow. Political campaigns are often risk-averse, and many strategists have been hesitant to jump in, especially given the public’s negative perception of generative AI in politics. The New York Times reported in August that only a handful of candidates were using AI, and that several of those who did wanted to hide the fact from the public. “If someone was saying, ‘This is the AI election,’ I haven't really seen that,” Berezin says. “I've seen a few people explore using some of these new tools with a lot of gusto, but it's not universal.”

However, it’s likely that the role of generative AI will only expand in future elections. Improved technology will allow campaigns to create messaging and fundraise more quickly and inexpensively. AI could also aid the bureaucracy of vote processing. Automated signature verification—in which a mail voter’s signature is matched with their signature on file—was used in several counties in 2020, for instance.
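
Production systems use trained, audited models for that matching, but the core mechanic can be sketched simply: score the similarity of the two signature images and route low-confidence pairs to a human. A toy illustration, with an arbitrary threshold, using off-the-shelf structural similarity:

```python
# Illustrative sketch only: real election offices use trained, audited
# verification software, not raw image similarity. This shows the core
# mechanic: score two signature images and send low-confidence pairs
# to a human reviewer.
from skimage.io import imread
from skimage.transform import resize
from skimage.metrics import structural_similarity

def signature_match(ballot_path: str, on_file_path: str,
                    threshold: float = 0.7) -> bool:
    """Return True if the ballot signature resembles the one on file."""
    ballot = imread(ballot_path, as_gray=True)
    on_file = imread(on_file_path, as_gray=True)
    on_file = resize(on_file, ballot.shape)  # align dimensions first
    score = structural_similarity(ballot, on_file, data_range=1.0)
    # Below the threshold (an arbitrary value here), the ballot would be
    # routed to manual review rather than rejected outright.
    return score >= threshold
```

In practice the threshold matters enormously, since a false rejection risks disenfranchising a legitimate voter; flagged ballots typically go to staff review and voter “curing” rather than automatic rejection.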

But improved AI technology will also produce more believable deepfake video and audio clips, likely fueling both the spread of disinformation and a growing distrust of political messaging in general. “This is a threat that's going to be increasing,” says Hoffmann, the Georgetown researcher. “Debunking and identifying these influence campaigns is going to become even more time-consuming and resource-consuming.”
