
AI deepfakes a top concern for election officials with voting underway

By Devin Dwyer and Sarah Herndon, ABC News | Oct 18, 2024, 6:07 AM
State election officials who will oversee voting for the November general election prepare for disruptions from artificial intelligence during a training session in Phoenix, Ariz. (Via ABC News)

(PHOENIX) — In the final weeks of a divisive, high-stakes campaign season, state election officials in political battleground states say they are bracing for the unpredictable, emerging threat posed by artificial intelligence, or AI.

“The number one concern we have on Election Day are some of the challenges that we have yet to face,” Arizona Secretary of State Adrian Fontes said. “There are some uncertainties, particularly with generative artificial intelligence and the ways that those might be used.”

Fontes, a Democrat, said his office is aware that some campaigns are already using AI as a tool in his hotly contested state and that election administrators urgently need to familiarize themselves with what is real and what is not.

“We’re training all of our election officials to make sure that they’re familiar with some of the weapons that might be deployed against them,” he said.

During a series of tabletop exercises conducted over the past six months, Arizona officials for the first time confronted hypothetical scenarios involving disruptions on Election Day on Nov. 5 created or facilitated by AI.

Some involved deepfake video and voice-cloning technology deployed by bad actors across social media in an attempt to dissuade people from voting, disrupt polling places, or confuse poll workers as they handle ballots.

In one fictional case, an AI-generated fake news headline published on Election Day said there had been shootings at polling places and that election officials had rescheduled the vote for Nov. 6.

“They walk us through those worst case scenarios so that we can be critically thinking, thinking on our toes,” said Gina Roberts, voter education director for the nonpartisan Arizona Citizens Clean Elections Commission and one of the participants in the exercise.

The tabletop exercise also studied recent real-world examples of AI being deployed to try to influence elections.

In January, an AI-generated robocall mimicking President Joe Biden’s voice was used to dissuade New Hampshire Democrats from voting in the primary. The Federal Communications Commission assessed a $6 million fine against the political consultant who made it.

In September, Taylor Swift said on Instagram that she endorsed Vice President Kamala Harris in part to refute an AI-generated deepfake image that falsely showed her endorsing Donald Trump.

There have also been high-profile cases of foreign adversaries using AI to influence the campaign. OpenAI, the company behind ChatGPT, says it shut down a secret Iranian effort to use its tools to manipulate U.S. voter opinion.

The Justice Department has also said that Russia is actively using AI to feed political disinformation onto social media platforms.

“The primary targets of interest are going to be in swing states, and they’re going to be swing voters,” said Lucas Hanson, co-founder of CivAI, a nonprofit group that tracks the use of AI in politics to educate the public.

“An even bigger [threat] potentially is trying to manipulate voter turnout, which in some ways is easier than trying to get people to actually change their mind,” Hanson said. “Whether or not that shows up in this particular election, it’s hard to know for sure, but the technology is there.”

Federal authorities say that while the risks aren’t entirely new, AI is amplifying attacks on U.S. elections with “greater speed and sophistication” at lower costs.

“Those threats [are] being supercharged by advanced technologies — the most disruptive of which is artificial intelligence,” Deputy Attorney General Lisa Monaco said last month.

In a bulletin to state election officials, the Department of Homeland Security warns that AI voice and video tools could be used to create fake election records; impersonate election staff to gain access to sensitive information; generate fake voter calls to overwhelm call centers; and more convincingly spread false information online.

Hanson says voters need to educate themselves on spotting AI attempts to influence their views.

“In images, at least for now, oftentimes if you look at the hands, then there’ll be the wrong number of fingers or there will be not enough appendages. For audio, a lot of times it still sounds relatively robotic. In particular, sometimes there will be these little stutters,” he said.

Social media companies and U.S. intelligence agencies say they are also tracking nefarious AI-driven influence campaigns and are prepared to alert voters about malicious deepfakes and disinformation.

But they can’t catch them all.

More than 3 in 4 Americans believe it’s likely AI will be used to affect the election outcome, according to an Elon University poll conducted in April 2024. Many voters in the same poll also said they worry they are not prepared to detect fake photos, video and audio on their own.

“In the long term, if you can see something that seems impossible and it also makes you really, really mad, then there’s a pretty good chance that that’s not real,” Hanson said. “So part of it is you have to learn to listen to your gut.”

In states like Arizona, which could decide a razor-tight presidential race, the stakes are higher than ever.

“AI is just the new kid on the block,” Fontes said. “What exactly is going to happen? We’re not sure. We are doing our best preparing for everything except Godzilla. We’re preparing for about everything, because if Godzilla shows up, all bets are off.”

Copyright © 2024, ABC Audio. All rights reserved.