The United States and other countries have seen a surge in foreign and domestic actors attempting to influence electoral outcomes in recent years. That’s old news. The new question on policymakers’ minds is whether the recent explosion in generative artificial intelligence will impact campaigning in 2024.
There are many ways AI could harm the electoral process, from chatbots spreading voter misinformation to AI-generated voices used in phishing scams targeting election officials. But most concern has centered on deepfakes: videos and images manipulated or fabricated by AI. Deepfakes give bad actors another tool to deceive voters and damage political rivals.
“AI’s ability to spread misinformation has been inherited from social media.”
—Nate Persily, Freeman Spogli senior fellow, Stanford University
Although misinformation is as old as time, generative AI poses new challenges and threats to campaigns. To better understand those challenges and the difficulties of legislating the technology, NCSL spoke with experts and legislators: Nate Persily, professor of law and Freeman Spogli senior fellow at Stanford; Ethan Bueno de Mesquita, Sydney Stein professor and interim dean at the University of Chicago; and Kansas Rep. Pat Proctor (R).
The trouble deepfakes cause is not new; people have been doing the same with “shallow fakes” for a long time, Persily says. Shallow fakes involve the use of traditional editing software, such as Photoshop, to deceive people into thinking a piece of altered media is authentic.
One example of a shallow fake is a 2020 video of Rep. Nancy Pelosi seemingly slurring her words. The video, created by slowing authentic footage in traditional editing software, led some viewers to question her fitness for office.
There are many unknowns when it comes to generative AI, but some things are already becoming clear. Several countries, including Turkey and Argentina, have held elections in which AI played a role in campaigns.
AI’s Effect on Campaigns
Bueno de Mesquita puts the current uses of AI into two camps: deceiving people and generating publicity through novelty and shock value. He says the latter will fade as the technology loses its shock value, but he believes deception is here to stay.
As for the first camp, Bueno de Mesquita says AI-generated deception may cause at least two problems, the main one being the spread of misinformation. This was seen in Turkey when a deepfake video showed presidential candidate Kemal Kilicdaroglu clapping alongside a member of the Kurdistan Workers’ Party, a Kurdish militant group that Turkey, the European Union and several other countries have designated as a terrorist organization.
The second problem, he says, is that over time, AI may also erode trust in authentic information. “Widespread circulation of manufactured content may undermine voters’ trust in the broader information environment. If voters come to believe that they cannot trust any digital evidence, it becomes difficult to seriously evaluate those who seek to represent them,” Bueno de Mesquita writes in a white paper he co-authored.
Persily adds that generative AI will make up only a small percentage of what people see online but will lead to skepticism of all other content.
This dynamic played out in Turkey with the release of a sex tape allegedly involving presidential candidate Muharrem Ince. He claimed the video was a deepfake, casting public doubt on its authenticity.
The U.S. is also starting to see how generative AI will affect campaigning. Proctor first became aware of the issue when his wife saw a video on Facebook falsely presented as a CNN report.
Despite these tangible examples, it’s hard to say how much harm generative AI will cause. Bueno de Mesquita says there is little research on misinformation’s effect on voting because that effect is hard to quantify. “There is good reason to believe that AI will make misinformation worse—although we don’t have a good handle on how much worse,” he says.
Legislating AI in Campaigns
Every AI campaign bill enacted in 2023 received some amount of cross-party support. Proctor says that political will exists to take bipartisan action in Kansas and that Republicans and Democrats in the Sunflower State are actively working on legislation.
To date, six states have enacted policies regulating generative AI’s use in campaigns. One approach prohibits the technology’s use in elections and campaigns, with these laws restricting the publication of AI-generated content during a window of months before polls open. Another approach requires disclaimers on content generated by AI, similar to disclosures of who pays for political ads.
Digital signatures could be another policy option, Bueno de Mesquita says. Digital signatures involve either embedding information in a file’s metadata (descriptive data stored within the file) or placing a visible watermark on an image or video. Either method identifies where a piece of media originated, whether from a publication site or an AI image generator. Bueno de Mesquita points out that digital signatures would require a cultural shift in how users verify the authenticity of the content they see online, a habit that is currently uncommon.
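To make the metadata version of this idea concrete, here is a minimal sketch in Python using the Pillow imaging library. The article names no specific tools, so the library choice, the “Origin” field name and the generator label are all illustrative assumptions:

```python
# Minimal sketch: embedding and reading a provenance note in a PNG's
# metadata with the Pillow library. The "Origin" key and the generator
# label are hypothetical examples, not part of any real standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_image_origin(src_path: str, dst_path: str, origin: str) -> None:
    """Save a copy of the image with a provenance note in its metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("Origin", origin)  # e.g., "example-ai-image-generator"
    image.save(dst_path, pnginfo=metadata)

def read_image_origin(path: str) -> str | None:
    """Return the provenance note, or None if the image carries none."""
    image = Image.open(path)
    return image.text.get("Origin")  # .text exposes a PNG's textual metadata

# Usage:
# tag_image_origin("photo.png", "tagged.png", "example-ai-image-generator")
# print(read_image_origin("tagged.png"))
```

A bare metadata field like this can be stripped or rewritten by anyone, which is why real provenance schemes, such as the C2PA standard, cryptographically sign the embedded data, and why the approach only helps if users actually check for it.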
Challenges in Creating Legislation
When the COVID-19 pandemic hit, states needed to quickly enact legislation on health, emergency powers and election administration, with few prior analogs to guide them. In a similar vein, generative AI emerged quickly and with few precedents for policymakers to draw on.
Despite the challenges, many legislators are breaking new ground on policy. Proctor says policy must be implemented in a way that safeguards the First Amendment right to free speech. He notes there could be legitimate reasons for using AI, such as creating political satire.
One challenge legislators might face is defining AI, Persily says. He compares this to the difficulty campaign finance law has had defining terms like “express advocacy.” Lawmakers want a definition broad enough to capture the conduct they are targeting without sweeping in uses they don’t intend to regulate.
Another challenge is determining the threshold at which AI use triggers a legal restriction. Everything from cellphone camera software to text autocorrection has used AI technologies for some time. “Clearly these laws aren’t meant to prohibit changing the lighting of a photo; it’s about the nefarious uses, like to say an event never happened,” Persily says.
It’s important to keep in mind that generative AI is not the sole cause of the problems associated with it, Persily says. “AI’s ability to spread misinformation has been inherited from social media.” Social media’s purpose is to disseminate information, including AI-generated content.
Persily says social media creates additional challenges because it is easier for governments to regulate ads than organic content. When a political campaign publishes deceptive material, governments can quickly identify who is at fault and intervene. On the wider internet, a post’s origins quickly get lost as people unknowingly repost false information.
What’s Next?
It is hard to say how generative AI might affect the 2024 election given the small number of campaigns in which this rapidly developing technology has played a role. To follow AI election and campaigning legislation, visit NCSL’s page Artificial Intelligence (AI) in Elections and Campaigns.
Adam Kuckuk is a policy analyst in NCSL’s Elections and Redistricting Program.