State Actions
AI is only the latest wrinkle in a long line of technological changes to impact state campaigns and elections. Candidates, campaign staff and election administrators have always adapted to new technologies, from political television ads in the mid-20th century to the recent rise of cryptocurrency contributions.
With the 2024 elections quickly approaching, state policymakers have been contemplating how to regulate this technology in a presidential election year. Only a few bills addressing AI in election administration have been introduced, most notably Arizona’s 2024 SB 1360, which would have prohibited machines used in elections from using AI. Political messaging bills have seen more activity in the years leading up to the 2024 election.
Since 2019, at least 19 states have enacted laws regulating AI’s use in political messaging. Three main considerations have shaped this legislation: how the bills define AI, what regulatory provisions they include, and what enforcement provisions look like.
How States Are Defining AI in the Context of Campaigning and Political Messaging
AI is challenging to define because of its complexity and evolving nature. The algorithms underlying AI systems vary from program to program, and as a result there is no universally accepted definition of AI. States may use different terms, such as synthetic media, deceptive media, deepfake or a variety of others, to encompass the deceptive use of media to influence an election.
Some states have attempted to define “artificial intelligence” directly. Florida defines generative AI as “a machine-based system that can, for a given set of human-defined objectives, emulate the structure and characteristics of input data in order to generate derived synthetic content.” New Mexico defines AI as “a machine-based or computer-based system that through hardware or software uses input data to emulate the structure and characteristics of input data in order to generate synthetic content, including images, video or audio.”
Other states use definitions that may encompass AI without using that exact phrase. Minnesota defines a “deepfake” as an image, audio or video whose production was “substantially dependent upon technical means” rather than an individual’s physical or verbal ability to impersonate someone.
In the political messaging context, states have used various terms to reference AI-generated content, such as “synthetic media,” “deepfake,” “materially deceptive media” and “doctored media.” These terms may have different definitions from state to state. For example, Wisconsin and Washington both use the term “synthetic media.” Wisconsin defines synthetic media as including audio and video, while Washington includes audio, video and images. Similarly, California and Michigan both use the term “materially deceptive media,” but only Michigan requires the media to have been generated by AI.
States’ definitions may cover more than just AI. California uses the term “materially deceptive media,” defining it as media that meets two conditions: 1) a reasonable person would believe the media is authentic, and 2) a reasonable person would have a fundamentally different understanding of the media if it were unaltered. This definition makes no mention of technology and thus may apply to any misleading or false content, whether generated by AI or otherwise.
Idaho and Washington both define synthetic media as using “generative adversarial network techniques or other digital technology.” Generative adversarial networks (GANs) are one of several underlying models that AI uses to generate content, with diffusion models being another example. Because GANs are only one of many models, these definitions may not capture all AI-generated content.
Most states’ laws regulate AI by focusing on the medium through which AI-generated content is distributed: audio, images and/or video. Many states include all three media, but some include only one or two. Texas passed legislation in 2019 defining deepfakes as videos, notably leaving out other media and leaving the law’s applicability to audio and images unclear.
Provisions States Are Including to Regulate AI Use in Political Messaging
States have taken several legislative approaches to regulating AI in political messaging. However, no state has enacted a complete ban on deceptive AI-generated political messaging, likely due to First Amendment concerns. New state laws establish durational prohibitions and disclosure requirements, and some states have used existing laws to address deceptive practices.
Though no state imposes a full prohibition, two states rely solely on durational prohibitions. These laws make it a crime to publish deepfakes intended to influence an election within a specified window of time: Minnesota prohibits the publication of deepfakes 90 days prior to an election, and Texas prohibits publishing a deepfake video 30 days prior to an election.
By far, the most common regulatory approach has been requiring disclosures. Most state campaign finance laws require disclosures on ads identifying the committee or person who funded them. Similarly, AI disclosure laws require content to include text stating that it was generated by AI.
Disclosures and durational prohibitions can work together. Disclosure requirements often include durational prohibitions but allow the content to still be published so long as the disclosure is included. For example, Arizona prohibits deceptive AI-generated content 90 days prior to an election, unless the message includes a “clear and conspicuous disclosure” that the content was generated by AI. Washington, on the other hand, does not include a duration requirement, and disclosures must be included on political deepfakes published year-round.
Another approach is to require digitally embedded disclosures. These disclosures require information to be contained in a digital file’s metadata: descriptive information about a file’s creator, when the file was created, when it was edited and so on. Colorado and Utah both require AI-generated content to contain metadata identifying the program used to create it, who created it and when it was created, along with a disclosure stating it was created by AI. This requirement allows news and social media sites that host these images, or any person who wants to, to independently verify the media’s authenticity.
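To illustrate how an embedded disclosure could be checked, the sketch below reads an image file’s metadata using Python’s Pillow library. The file name and the metadata keys it looks for (“Software,” “Artist,” “DateTime,” “ai_disclosure”) are illustrative assumptions; neither Colorado’s nor Utah’s statute prescribes specific metadata fields.

```python
# A minimal sketch of how a platform or reader might inspect a file's
# metadata for an embedded AI disclosure. The metadata keys checked below
# are illustrative assumptions, not fields named in any state statute.
from PIL import Image
from PIL.ExifTags import TAGS


def collect_metadata(path: str) -> dict:
    """Gather format-level metadata and EXIF tags from an image file."""
    fields = {}
    with Image.open(path) as img:
        fields.update(img.info)  # format-level metadata, e.g., PNG text chunks
        exif = img.getexif()     # EXIF tags, common in JPEGs
        for tag_id, value in exif.items():
            fields[TAGS.get(tag_id, tag_id)] = value
    return fields


if __name__ == "__main__":
    meta = collect_metadata("campaign_ad.png")  # hypothetical file name
    for key in ("Software", "Artist", "DateTime", "ai_disclosure"):
        print(f"{key}: {meta.get(key, 'not present')}")
```

Note that metadata can be stripped or rewritten when a file is re-saved or re-uploaded, which is one practical limit of this verification approach.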
AI laws may not be needed in some states as their current laws may already cover unwanted conduct. In early 2024, New Hampshire residents received an AI-generated phone call that sounded like President Joe Biden telling voters not to vote. At the time, New Hampshire did not have an AI political messaging law, so the New Hampshire Attorney General’s Office instead charged the political consultant behind the calls with 13 counts of voter suppression and 13 counts of impersonation of a candidate. In August 2024, New Hampshire enacted HB 1596 to require the disclosure of deceptive AI usage in political advertising.
To avoid First Amendment violations, some states have built exceptions into their laws. One example is New York, which exempts satire or parody. Another exception covers media providers: In Wisconsin, liability for damages is waived for broadcasters and online platforms so long as they did not create the content.
How States Enforce AI Policies and Assign Penalties
Most AI political messaging laws include some form of civil or criminal penalty for violations. However, none of these laws have been widely tested in the courts.
Many states have chosen to leave enforcement to civil courts. Candidates in most states can seek injunctive relief to prohibit the further publication of an AI-generated image of themselves. Some states, like Mississippi, have extended a cause of action for the “wrongful dissemination of digitizations” to other individuals who may have been falsely depicted, like lay people or other public officials.
Several states allow candidates to seek repayment of court costs and damages if they are either the subject of a frivolous lawsuit or prevail in court on the merits. As AI content grows more sophisticated, people may also sue over authentic content, claiming it is fake. Alabama has attempted to remedy this by also allowing a defendant to recover court costs and attorney’s fees for frivolous lawsuits.
States may also impose civil penalties. New Mexico and Utah fine offenders $1,000 for each violation for disseminating materially deceptive media and synthetic media, respectively. Colorado’s civil penalty may include 10% of the dollar amount used to promote a deepfake.
About a third of the states that have enacted laws also attach criminal penalties specific to their AI political messaging laws. Although less common than civil penalties, some laws impose prison time. Texas’s law, for example, allows a maximum sentence of one year in prison for a violation. Some prison sentences are tied to specific conditions, such as in Minnesota, where second violations carry longer maximum sentences. In Mississippi, violations intended to cause violence carry a maximum of five years, whereas all other violations carry a maximum of one year.
Some states also have criminal fines that can be imposed with, or instead of, prison time. Michigan’s fine of $500 for the first violation is among the nation’s lowest, while Minnesota’s fine of $10,000 for the second violation is one of the largest.
Benefits and Risks of AI in Elections
AI is rapidly influencing elections in this country and worldwide. While the technology offers considerable potential, it also poses risks, particularly in the form of disinformation, deepfakes and voter suppression. Leaders in federal and state governments are responding by outlining strategies and enacting new laws to mitigate harm from deceptive AI election content. As legislative leaders explore policy options, some considerations to keep in mind are:
- First Amendment implications.
- Types of media that are being regulated (audio, video or images).
- Types of penalties imposed for violations (civil or criminal).
- Federal agency or congressional preemptions.