How Are Some States Currently Regulating AI?
While no federal legislation focused on protecting people from the potential harms of AI and other automated systems appears imminent, states are moving ahead to address the potential harms of these technologies. Bipartisan efforts in state legislatures seek to balance stronger protections for citizens with continued innovation and commercial use of AI.
Legislators are focusing oversight on the impact of an algorithm rather than on the specific tool itself, an approach that allows innovation to flourish while keeping necessary protections future-proofed. According to a recent Brookings Institution analysis of state AI activity, the term “artificial intelligence” is a catchall notion that motivates legislative action, but legislators are actually eschewing the term when defining the scope of oversight, “focusing instead on critical processes that are being performed or influenced by an algorithm.” According to the research, state governments are covering any type of algorithm used for the regulated process and are concerned with the impact on people’s civil rights, opportunities for advancement and access to critical services.
Building data transparency is another area of focus for state legislators. Brookings argues that businesses and governments should explicitly inform affected persons when algorithms are used for important decisions. Some states have also proposed making algorithmic impact assessments public to add another layer of accountability and governance. Brookings agrees that “requiring public disclosure about which automated tools are implicated in important decisions” is critical to enabling effective governance and engendering public trust. In its view, “states could require registration of such systems and further ask for more systemic information, such as details about how algorithms were used, as well as results from a system evaluation and bias assessment.”
In the 2019 legislative session, bills and resolutions dealing specifically with artificial intelligence were introduced in at least 20 states, and measures were enacted or adopted in Alabama, California, Delaware, Hawaii, Idaho, Illinois, New York, Texas and Vermont. Many of the measures proposed to create task forces or studies.
California enacted legislation regulating pretrial risk assessment tools, requiring each pretrial services agency that uses such a tool to validate it by Jan. 1, 2021, and on a regular basis thereafter, at least once every three years, and to make information about the tool, including validation studies, publicly available.
The Illinois General Assembly enacted the Artificial Intelligence Video Interview Act. The act provides that employers must notify applicants before a videotaped interview that artificial intelligence may be used to analyze the interview and consider the applicant's fitness for the position. Employers also must provide each applicant with information before the interview explaining how artificial intelligence will be applied and what general types of characteristics it uses to evaluate applicants. Before the interview, employers must obtain the applicant's consent to be evaluated by the artificial intelligence program. Employers may not share applicant videos unnecessarily, and they must delete an applicant’s interview upon the applicant's request.
Enacted in 2018, California’s Bolstering Online Transparency Act took effect in 2019. The law makes it unlawful for any person to use a bot to communicate or interact online with another person in California with the intent to mislead that person about the bot's artificial identity, for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services or to influence an election.
During the 2020 state legislative session, at least 15 states introduced artificial intelligence bills and resolutions, with legislation enacted in Massachusetts and Utah. Massachusetts created a special commission to study the impact of automation, AI, global trade, access to new forms of data and the internet of things (IoT) on the workforce, businesses and the economy, with the main objective of ensuring sustainable jobs, fair benefits and workplace safety standards.
The Utah legislation created a deep technology talent initiative within higher education. "Deep technology" is defined as technology that leads to new products and innovations based on scientific discovery or meaningful engineering innovation, including those related to artificial intelligence.
Artificial intelligence bills and resolutions were introduced in at least 17 states in the 2021 legislative session, and enacted in Alabama, Colorado and Mississippi. Alabama established the Alabama Council on Advanced Technology and Artificial Intelligence to review and advise the governor, the Legislature and other interested parties on the use and development of advanced technology and artificial intelligence in the state. Colorado enacted legislation prohibiting insurers from using any external consumer data and information sources, as well as any algorithms or predictive models that use such sources, in a way that unfairly discriminates based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.
Illinois amended the 2019 Artificial Intelligence Video Interview Act to require employers that rely solely upon artificial intelligence to determine whether an applicant will qualify for an in-person interview to gather and report certain demographic information to the Department of Commerce and Economic Opportunity, and to require the department to analyze the data and report to the governor and General Assembly whether the data discloses a racial bias in the use of artificial intelligence.
Mississippi directed the State Department of Education to implement a K-12 computer science curriculum that includes instruction in robotics, artificial intelligence and machine learning.
In the 2022 legislative session, artificial intelligence bills and resolutions were introduced in at least 17 states, and enacted in Colorado, Florida, Idaho, Maine, Maryland, Vermont and Washington.
Vermont created the Division of Artificial Intelligence within the state Agency of Digital Services to review all aspects of AI developed, employed or procured by the state. The legislation required the Division of Artificial Intelligence to propose a state code of ethics on the use of AI and required the Agency of Digital Services to conduct an inventory of all the automated decision systems developed, employed or procured by the state.
In 2021, Washington provided funding for the office of the chief information officer to convene a work group to examine how automated decision-making systems can be reviewed and periodically audited to ensure they are fair, transparent and accountable. The legislation was amended in 2022 to require the chief information officer to prepare, and make publicly available on the office's website, an initial inventory of all automated decision systems being used by state agencies.
The 2023 legislative session is seeing an uptick in state legislative action, with at least 25 states, Puerto Rico and the District of Columbia introducing artificial intelligence bills, and 14 states and Puerto Rico adopting resolutions or enacting legislation.
Connecticut requires the state Department of Administrative Services to conduct an inventory of all systems that employ artificial intelligence and are in use by any state agency. Beginning on Feb. 1, 2024, the department shall perform ongoing assessments of systems that employ artificial intelligence and are in use by state agencies to ensure that no such system shall result in any unlawful discrimination or disparate impact. Further, the Connecticut legislation requires the Office of Policy and Management to develop and establish policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence and are in use by state agencies.
Louisiana adopted a resolution requesting the Joint Committee on Technology and Cybersecurity to study the impact of artificial intelligence in operations, procurement, and policy. Maryland established the Industry 4.0 Technology Grant Program in the Department of Commerce to provide grants to certain small and medium-sized manufacturing enterprises to assist those manufacturers with implementing new industry 4.0 technology or related infrastructure. The definition of industry 4.0 includes AI.
North Dakota enacted legislation defining a person as an individual, organization, government, political subdivision, or government agency or instrumentality, and providing that the term does not include environmental elements, artificial intelligence, an animal or an inanimate object. Texas created an AI advisory council to study and monitor artificial intelligence systems developed, employed or procured by state agencies; North Dakota, Puerto Rico and West Virginia created similar studies.