AI could contribute to election misinformation with little federal government response
By SHAUN CHORNOBROFF
WASHINGTON – As millions of Americans prepare to cast their ballots in the election, the federal government has done little to mitigate public fears surrounding the potential influence of artificial intelligence (AI) deepfakes.
Despite multiple bipartisan bills being introduced in both the Senate and the House, Congress has yet to enact any legislation and is unlikely to do so before the election.
“Those tools are being used to mislead voters about elections and spread falsehoods about candidates,” Sen. Richard Blumenthal, D-Connecticut, said on Tuesday during an AI-focused hearing by the Senate Judiciary Committee’s privacy, technology, and the law subcommittee. Blumenthal chairs the panel.
An Elon University survey in May found that more than three-quarters of Americans believe AI abuses will affect the outcome of the presidential election.
Capital News Service talked to multiple experts in AI and politics who explained that, while there are concerns surrounding AI, the evolving technology changing the outcome of elections is not among them.
“Will AI potentially convince some voters to vote for one candidate instead of another? Maybe,” said Dr. Keegan McBride, an AI, government and policy expert at the Oxford Internet Institute at the University of Oxford. “Will they be able to convince a couple hundred thousand voters in Pennsylvania that they should vote for candidate A instead of candidate B? Probably not.”
AI has already made its presence known in the 2024 presidential election cycle.
In June 2023, a video shared by the campaign team for one-time Republican presidential candidate and Florida Gov. Ron DeSantis showed a fake photo of former President Donald Trump hugging Dr. Anthony Fauci, the former director of the National Institute of Allergy and Infectious Diseases and former chief medical advisor to the president.
Trump also posted AI images on Truth Social in August of singer Taylor Swift and her fans supposedly endorsing him. Swift endorsed Democratic nominee Kamala Harris after the Sept. 10 debate between Harris and Trump.
Most notably, robocalls in New Hampshire impersonated President Joe Biden and discouraged citizens from voting in the state’s January primary. Steven Kramer, a political consultant, admitted to masterminding the plan and is facing 26 criminal charges, as well as a $6 million fine from the Federal Communications Commission (FCC).
The agency ruled on Feb. 8 that AI-generated voices used in robocalls are illegal.
“We’re putting the fraudsters behind these robocalls on notice,” said FCC Chairwoman Jessica Rosenworcel.
While these instances drew widespread coverage, Dr. Cody Buntain, an assistant professor at the University of Maryland’s College of Information, said the examples the public has seen thus far haven’t had the intended effect.
“They’re notable because they weren’t convincing and no one bought them,” Buntain said. “Very quickly the country and the news media jumped on them, said ‘this is totally not legit,’ and we’ve sort of all agreed with that.”
While swaying the results of the election was not a prevalent concern with any of the experts, each said AI has the potential to fuel further public distrust of government and the political system.
Peter Loge, the founding director of the Project on Ethics in Political Communication at The George Washington University’s School of Media and Public Affairs, has more than three decades of experience in politics. When he first started out in the industry, campaigns were cutting and pasting images. That changed when Photoshop became popular. “It was cutting and pasting, but way better and faster,” said Loge.
Generative AI is the latest evolution, he said.
“It’s easier to generate garbage than ever before, and it’s easier to circulate garbage than ever before,” Loge said. “Making up stuff about your opponents, making up news stories, doctoring photographs, deepfakes – lies aren’t new.”
For decades, trust in the federal government has been eroding. In 1964, 77% of people said they trusted the government, according to the Pew Research Center. Sixty years later, that number sits at 22%.
The inability to distinguish truth from generative AI has the potential to exacerbate this trend, multiple experts said.
On Aug. 7, Harris and her running mate, Minnesota Gov. Tim Walz, held a rally in Detroit, Michigan, where a photo of an estimated 15,000-person crowd circulated on social media. Four days later, Trump said on Truth Social that the photos were AI-generated.
The photo was authentic, but Loge believes that Trump’s false AI claim represents a major problem.
“You get to claim that the real is fake because everybody believes that everything’s being faked anyway, and that increases cynicism,” Loge said. “It decreases confidence in campaigns and democracy, all of which is very bad.”
In the absence of federal legislation regarding AI and the election, individual states have sought to address the matter. At least 19 states have passed laws since 2019 regulating the use of AI in political messaging, according to the National Conference of State Legislatures.
In Maryland, six state lawmakers in February proposed legislation requiring campaigns to disclose the use of artificial intelligence for political materials.
The bill, which was similar to those passed in a number of other states, saw no action in this year’s legislative session.
“We need a disclosure if there’s an image. We need a disclosure if there’s media. We need a disclosure if there’s audio,” Del. Anne Kaiser, D-Montgomery County, said during a Feb. 27 committee hearing. “This bill would codify Maryland’s election law.”
With Congress scheduled to go on an election break next week, it’s becoming increasingly likely that more consequence-free AI ads and deepfakes will appear before Election Day.
House Speaker Mike Johnson, R-Louisiana, told Axios on Thursday that he wants Congress to adopt a laissez-faire approach to AI for the time being.
“We got to build consensus on what the right approach is, but I think you’ll see a lot of emphasis on that in the first quarter of next year,” Johnson said.
While AI is unlikely to influence the result of the election, Buntain did offer a potential doomsday scenario come November if the vote tally is close.
“If people are uncertain about the outcome of the election, and there’s a certain cohort of the population who already don’t have a lot of trust in the electoral process,” he said, “then the use of AI, in particular manipulated imagery, has a much more risky potential to at least get people out into the streets and stir up feelings of anger and resentment and mobilize people.”
Capital News Service is a student-powered news organization run by the University of Maryland Philip Merrill College of Journalism. With bureaus in Annapolis and Washington run by professional journalists with decades of experience, it delivers news in multiple formats via partner news organizations and a destination website.