State Action
Just as the federal government is using AI, state governments are using AI for government operations and to provide services to constituents. State legislatures, governors and state agencies have considered various means to study and guide the use of AI, both to improve and transform government services and to identify its potential risks.
During the 2024 legislative session, state legislators considered over 150 bills relating to government use of AI, addressing AI inventories, impact assessments, use guidelines, procurement standards and government oversight bodies. Governors in more than 10 states, including Alabama, Maryland, Massachusetts, Oklahoma and Oregon, along with the mayor of Washington, D.C., have issued executive orders to study AI use in running government operations and providing government services and benefits.
Inventories and Impact Assessments
At least 10 states, including Connecticut, Delaware, Maryland, Vermont and West Virginia, have instructed state agencies to inventory and describe the AI applications used in their operations and in the services they deliver. Notable enactments include:
- In 2022, Vermont enacted legislation creating the Division of Artificial Intelligence within the Agency of Digital Services to review all aspects of AI developed, employed or procured by the state. The law requires the agency to conduct an inventory of all automated decision systems; inventories for 2023 and 2024 are publicly available.
- In 2022, Washington enacted legislation directing the state chief information officer to prepare, and make publicly available online, an initial inventory of all automated decision systems being used by state agencies. WaTech's 2023 inventory cataloged 8,379 applications, 129 of which (about 1.5%) were identified as automated decision systems.
- Texas enacted a law in 2023 requiring a newly created Texas AI Advisory Council to review automated decision system inventory reports created by state agencies. Accompanying guidance advises agencies not to inventory AI tools embedded in common commercial products, such as spam filters or spell checkers.
- In 2024, Delaware and Idaho created a commission and a council, respectively, to provide recommendations for statewide processes and guidelines, including oversight of required inventories.
To address concerns about possible bias, discrimination and disparate impact, states such as Connecticut, Maryland, Vermont, Virginia and Washington have mandated that state agencies run impact assessments to ensure the AI systems in use are ethical, trustworthy and beneficial. Impact assessment requirements vary among states:
- California's 2023 Executive Order directs state agencies to draft a report examining and explaining the potential risks generative AI poses to individuals, communities, government and state government workers, focusing on high-risk use cases, including when generative AI is used to make a consequential decision affecting access to essential goods and services. The order also requires several state agencies to conduct a joint risk analysis of the potential threats to, and vulnerabilities of, California's critical energy infrastructure presented by generative AI.
- In 2023, Connecticut enacted a law that requires an annual inventory of all systems that employ artificial intelligence and an impact assessment before deployment to ensure a system will not result in unlawful discrimination or disparate impact. Through these assessments, systems are categorized into risk tiers based on their potential for harm; a simplified scoring sketch follows this list. Connecticut's AI Responsible Use Framework incorporates three impact assessment templates, including the Canadian government's algorithmic impact assessment tool. The framework also specifies that if a state agency uses AI tools to create content or external-facing services, the agency must disclose the use of AI and the bias testing that was performed.
- Maryland enacted a law in 2024 requiring each unit of state government to conduct inventories of systems employing high-risk AI and conduct impact assessments.
- New York also passed a law in 2024, which is awaiting the governor's signature, specifying that state government cannot use automated decision-making systems without continued, operational and meaningful human review. An impact assessment is required before use to document the purpose of the system and the design and data used to train the model, and to test for accuracy, fairness, bias and discrimination, among other potential impacts.
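Connecticut's framework does not publish a single tiering formula, but questionnaire-based tools like the Canadian government's algorithmic impact assessment work by scoring answers and mapping the total to an impact level. The Python sketch below illustrates that general pattern; the questions, weights and tier thresholds are hypothetical placeholders, not Connecticut's or Canada's actual criteria.

```python
# Hypothetical questionnaire-based risk-tiering sketch. The questions,
# weights and tier cutoffs are illustrative only; they are not the actual
# criteria used by Connecticut or the Canadian AIA tool.

QUESTIONS = {
    # question id: (prompt, weight added to the score on a "yes" answer)
    "consequential": ("Helps make a consequential decision (benefits, "
                      "licensing, employment)?", 40),
    "personal_data": ("Processes personal data?", 20),
    "no_human_review": ("Acts without meaningful human review?", 25),
    "protected_groups": ("Outputs could differ across protected groups?", 15),
}

# Higher scores trigger heavier oversight before deployment.
TIERS = [
    (0, "Tier 1: minimal review"),
    (30, "Tier 2: standard impact assessment"),
    (60, "Tier 3: full assessment plus ongoing monitoring"),
]


def assess(answers):
    """Sum the weights of all 'yes' answers and map the total to a tier."""
    score = sum(weight for qid, (_, weight) in QUESTIONS.items()
                if answers.get(qid, False))
    tier = TIERS[0][1]
    for threshold, label in TIERS:
        if score >= threshold:
            tier = label
    return score, tier


# Example: a benefits-eligibility screener that keeps a human in the loop.
score, tier = assess({"consequential": True, "personal_data": True,
                      "no_human_review": False, "protected_groups": True})
print(f"score={score} -> {tier}")  # score=75 -> Tier 3: full assessment ...
```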
Guidance and Oversight for Government AI Use
Minnesota's Transparent Artificial Intelligence Governance Alliance identified opportunities that AI use in government presents, such as enhanced quality of life; increased efficiency; equitable and inclusive access to services; proactive and personalized government services; an empowered workforce; transparency and trust; innovative economic growth; data-driven decision-making; and improved education.
Georgia's AI Responsible Use guidance specifies that misuse of AI by state agencies can occur through AI-based fraud, discrimination, invasion of privacy, malicious use and the spread of misinformation. The same guidance warns that unintentional misuse can happen through bias and discrimination, privacy violations, inaccurate or misleading information, inappropriate context, or overreliance on AI.
Guidance and reports emerging across states highlight similar opportunities and areas of concern. At least 30 states have issued guidance on state agency use of AI through gubernatorial executive orders, agency collaboration, rulemaking and state legislation. Most state legislatures have enacted legislation setting forth specific requirements for AI use by state government or directing another entity to establish such guidelines.
States vary in how centralized their management of information technology resources is across state agencies, so responsibility for analyzing AI use and setting guidelines may fall to statewide CIOs, information technology agencies, operations and administration agencies or information technology personnel based in other agencies. Other states are discussing whether to create new positions for this work. The Oklahoma Governor's Task Force on Emerging Technologies recommended establishing a chief artificial intelligence officer, and Rhode Island is creating a single data governance structure and a new chief data officer position.
State legislatures also have established offices and other authorities to oversee AI implementation and make recommendations. Vermont's newly established Division of Artificial Intelligence within the Agency of Digital Services is charged with reviewing all aspects of artificial intelligence systems developed, employed or procured by state government. The division must propose a state code of ethics for AI use in government, to be updated annually, and make recommendations to the General Assembly on policies, laws and regulations for AI systems in state government. The division is required to file reports with the General Assembly on or before Jan. 15 each year. The same legislation established the Artificial Intelligence Advisory Council to provide advice and counsel to the division's director regarding these responsibilities and to engage in public outreach and education on AI.
In 2024, Florida created the Government Technology Modernization Council, an advisory council within the Department of Management Services. The council will study and monitor the development and deployment of new technologies and report its recommendations on the procurement and regulation of such systems to the governor, the president of the Senate and the speaker of the House of Representatives. Meeting quarterly, the council will, among other duties, recommend legislative and administrative actions that the Legislature and state agencies may take to promote data modernization in the state; assess and provide guidance on any necessary legislative reforms and the creation of a state code of ethics for artificial intelligence systems in state government; and assess how governmental entities and the private sector are using AI, with a focus on opportunities for deployment in systems across the state.
At least one quarterly meeting of the council must be a joint meeting with the Florida Cybersecurity Advisory Council. The council must annually submit any legislative recommendations to modernize government technology, including recommendations for accelerating the adoption of technologies that increase the productivity of state enterprise information technology systems, improve government customer service levels and reduce administrative or operating costs.
In 2024, Maryland established a governor's Artificial Intelligence Subcabinet within the governor's Executive Council to facilitate and enhance cooperation among units of state government, in consultation with academic institutions and industries using AI. The subcabinet is tasked with developing strategy, policy and monitoring processes for the responsible and productive use of AI and associated data by units of state government; overseeing implementation of the state's AI inventory; supporting AI and data innovation across state government; and developing and implementing a comprehensive action plan for that use.
Other examples include Utah's Office of Artificial Intelligence Policy and Hawaii's state Data Office. The data office, with the state Data Task Force, is leading work focused on the responsible use of data and AI. In its advisory action plan, the Wisconsin Governor's Task Force on Workforce and Artificial Intelligence recommended creating an Office of Data and Privacy under the Department of Administration to develop and implement a strategy and governance structure supportive of AI, noting that no single office or division in state government currently holds responsibility for data governance.
Principles Within State AI Guidelines
Common elements of state guidelines include specifying roles and responsibilities, guiding principles, new processes, inventory requirements and impact assessments. Some states have required working groups to suggest policies for internal government adoption and others have mandated certain requirements be added to procurement procedures for new equipment. Some states have created a new code of ethics; others have aligned with evolving international and national standards. Examples of state guidance principles include:
- Arizona's statewide policy requires users of the technology to adhere to requirements and considerations related to transparency, accountability, fairness, security, privacy, training, procurement, and collaboration.
- The Massachusetts Executive Office of Technology Services and Security established minimum requirements for the development and use of generative AI by state agencies. The guidelines incorporate the NIST AI Risk Management Framework to reduce risk and promote trustworthiness.
- Vermont's AI Code of Ethics identifies conflict of interest, bias and confidentiality concerns and highlights attributes to focus on such as safety, security, accountability and trustworthiness.
Colorado, Georgia, Maine, Maryland, New York, North Carolina, North Dakota and Washington referenced the NIST standards within their guidelines, while New Hampshire based its guidelines on the European Union's ethics guidelines for trustworthy AI.
Procurement
State employees responsible for information technology and purchasing are incorporating AI considerations within their current processes. The 2024 National Association of State Technology Directors survey, AI in State Government IT Operations, reported that 9% of respondents have developed preferred contract language around the use of AI for IT procurements, 62% are in the process of doing so and 29% have not yet begun efforts. A report from the National Association of State Procurement Officials and the National Association of State Chief Information Officers shows that successful AI initiatives in public procurement require close collaboration between procurement officials and chief information officers and must be supported by robust AI policies. The joint report identified seven key factors for successful AI public procurement: 1) develop comprehensive AI policies; 2) start with targeted use cases; 3) foster collaboration between procurement and IT; 4) engage vendors and suppliers effectively; 5) prioritize training and change management; 6) focus on ethical and responsible use; and 7) establish performance monitoring, continuous improvement and training.
Examples of state AI procurement processes include:
- California released guidelines for public sector procurement, use and training for generative AI. To use a generative AI product, state entities must go through a multistep process that includes outlining a problem definition, assessing impacts and requiring a "human in the loop." State entities may submit budget requests through the annual budget process for generative AI proofs of concept. California also requires state purchasing officials to take training on how to identify generative AI purchases.
- In Ohio, the policy for procuring new generative AI software requires review and approval from a multi-agency AI council that includes representatives from the governor's office and the Department of Administrative Services. The request must include a risk assessment, a privacy assessment, and a security review.
- While the Oregon State Government Artificial Intelligence Advisory Council works to develop an AI framework, interim guidance instructs state entities to submit an information technology request before investing in AI proofs of concept or pilots.
- Washington released procurement and use guidance for automated decision systems that requires an assessment to be conducted before a system's development or procurement. The procurement and development process also must include testing and validation to assess performance, accuracy and potential bias before deployment; a sketch of one such check follows this list.
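Washington's guidance does not prescribe particular metrics, but pre-deployment validation of this kind commonly compares a model's accuracy and selection rates across demographic groups. The Python sketch below shows one widely used check, the disparate-impact (selection-rate) ratio, on invented evaluation data; the metric choice, the 0.8 threshold and the data are illustrative assumptions, not requirements from Washington's guidance.

```python
# Illustrative pre-deployment validation: compare accuracy and selection
# rates across groups. The 0.8 cutoff echoes the common "four-fifths rule"
# heuristic; it is an assumption here, not part of any state policy.
from collections import defaultdict

def validate(records):
    """records: iterable of (group, predicted, actual) with 0/1 labels."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "selected": 0})
    for group, predicted, actual in records:
        s = stats[group]
        s["n"] += 1
        s["correct"] += int(predicted == actual)
        s["selected"] += int(predicted == 1)

    for group, s in sorted(stats.items()):
        print(f"group {group}: accuracy={s['correct'] / s['n']:.2f} "
              f"selection_rate={s['selected'] / s['n']:.2f}")

    rates = [s["selected"] / s["n"] for s in stats.values()]
    ratio = min(rates) / max(rates) if max(rates) > 0 else 1.0
    verdict = "passes" if ratio >= 0.8 else "flags"
    print(f"disparate impact ratio: {ratio:.2f} ({verdict} the 0.8 heuristic)")

# Invented evaluation data: (group, model prediction, true outcome).
sample = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
          ("B", 0, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0)]
validate(sample)  # group B is selected far less often, so the check flags it
```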
How are state governments using AI?
State agencies are using tools with a range of capabilities, such as robotic process automation, natural language processing, machine learning and content generation. This use spans sectors, as AI helps states improve physical infrastructure, optimize government resources and assist citizens with inquiries.
State agencies have seen a steady increase in chatbot use since the COVID-19 pandemic, when at least 35 states used chatbots to support inquiries relating to health, unemployment benefits, taxes, Supplemental Nutrition Assistance Program benefits and citizen services. A 2024 survey of state technology directors on AI use showed half of states are using chatbots, 36% are using AI for office productivity and 26% are using it for code development. The survey found the four highest-ranked use cases for AI were cybersecurity, citizen portals, data management/analytics and office worker efficiency.
State legislatures have enacted legislation that includes funding for specific AI use in state government. Examples of those actions are:
- In 2021, Ohio required the Department of Medicaid to pilot a program using automation and artificial intelligence to achieve program savings.
- In 2022, the Florida Legislature appropriated funds to the Department of Health for the development of an AI customer service solution.
- In 2023, West Virginia created a pilot program to incorporate machine learning, AI or other advanced technologies to assess state roads.
- In 2024, the Hawaii Legislature appropriated funds to the University of Hawaii to establish and implement a two-year program to develop a wildfire forecast system for the state using AI.
States have begun piloting AI in a variety of ways, with activity increasing in 2024 and several efforts in a proof-of-concept phase. Five states initiated pilots through different approaches in 2024:
- In Arkansas, a working group launched by the governor is reviewing a set of pilot projects on unemployment insurance fraud and recidivism reduction to craft best practices for safe implementation of AI across state government.
- California announced partnerships with five vendors to test, iterate and evaluate generative AI proofs of concept addressing problems such as enhancing customer service, improving health care facility inspections, reducing highway congestion and improving roadway safety.
- The Massachusetts General Court appropriated $25 million for the study, planning and procurement of AI and machine learning systems for state agencies, in alignment with enterprise security policies.
- In Pennsylvania, the governor announced a pilot program with OpenAI using ChatGPT Enterprise. State employees in the Office of Administration will have access to the tool to help determine how AI tools can be incorporated into government operations.
- Utah enacted a law in 2024 creating an Artificial Intelligence Learning Laboratory Program to analyze the risks and opportunities of AI to inform legislation and regulation. In exchange for partnering with the state, a participant may apply for a temporary waiver of legal and regulatory requirements for AI testing purposes.
Many states have focused specifically on generative AI applications in their government AI guidance. Colorado's statewide GenAI policy prohibits the use of the free version of ChatGPT on any state-issued device because the governor's Office of Information Technology determined that its terms and conditions violated state law. Under the guidance, AI that uses machine learning without a generative component, such as fraud detection, spam filters or spelling autocorrect, is allowable without further approval.
In 2024, New Hampshire enacted legislation setting prohibited and allowable uses of AI by state agencies; all materials produced with generative AI must include a disclosure. Other states issuing guidance on government use of generative AI include Kansas, Maine, Maryland, Massachusetts, New Jersey, North Carolina, Pennsylvania, South Dakota, Washington and Wyoming.
How are state legislatures currently using generative AI?
Some state legislatures have begun to experiment with open-source AI tools to assist with internal processes, while others have partnered with large service providers like Microsoft and Amazon to build legislative applications. The Indiana General Assembly, for example, has developed a beta version of a generative AI chatbot that is open to the public and capable of answering questions about state statutes and regulations.
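NCSL has not described the Indiana chatbot's internal architecture; a common design for statute Q&A tools is retrieval-augmented generation, in which the most relevant statute text is retrieved and passed to a generative model along with the user's question. The Python sketch below shows only the retrieval step, using scikit-learn's TF-IDF vectorizer over a few paraphrased statute snippets; the snippets, section labels and overall design are illustrative assumptions, not Indiana's implementation.

```python
# Minimal retrieval step for a statute Q&A chatbot, assuming a
# retrieval-augmented design. Statute text is paraphrased for illustration.
# A production system would pass the retrieved text, along with the
# question, to a generative model and cite the source section.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

statutes = {
    "Sec. A": "A person must hold a valid driver's license to operate "
              "a motor vehicle on a public highway.",
    "Sec. B": "A person aged 65 or older may qualify for a property tax "
              "deduction on the person's principal residence.",
    "Sec. C": "Employers shall pay employees at least the minimum wage "
              "established under this chapter.",
}

sections = list(statutes)
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(statutes.values())

def retrieve(question):
    """Return the statute section whose text best matches the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    best = scores.argmax()
    return sections[best], statutes[sections[best]]

section, text = retrieve("Is there a property tax deduction for residents?")
print(f"{section}: {text}")  # matches Sec. B on shared terms
```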
More broadly, results from a spring 2024 NCSL survey of state legislative staff show that they have begun using generative AI tools like ChatGPT and Claude for a variety of purposes, including for research, creating first drafts of documents and editing text. Staff reported they have also begun using, or considered using, other generative AI tools for tasks like transcribing hearings and debates, bill drafting, cybersecurity and constituent relations. Likewise, commonly used programs like those in the Microsoft suite and legal tools like LexisNexis are beginning to gain generative AI functionality, which some legislatures have begun experimenting with.
As legislative staff begin incorporating these tools into their work processes, some legislatures are drafting and implementing related policies, with particular attention being given to the risks around exposure of sensitive information and inaccuracies in AI-generated content.
According to the spring 2024 survey results and other information collected by NCSL, policies vary by state and in most instances apply to individual offices rather than to legislatures as a whole. Some policies prohibit any use of these tools for legislative work; others provide general guidelines and encourage staff to exercise caution; still others require permission from a manager or allow only certain approved applications.
For additional information about how state legislatures are using these tools, see the results of the recent NCSL survey.