Growing numbers of legislators and staff use generative artificial intelligence to do research, summarize legal cases and draft documents, but they remain far from a majority, according to panelists at NCSL’s Base Camp 2024.
Nevertheless, legislators and staff are increasingly setting policy for the fast-evolving tool that promises both convenience and challenges.
“There are still many important issues for legislatures to figure out,” says Molly Ramsdell, vice president of NCSL’s State-Federal Affairs Division. “Legislative institutions will need to ensure that generative AI does not compromise the accuracy and quality of legislative work products, and in terms of privacy, legislatures may need to develop policies and procedures that protect sensitive data.”
“No. 1, legislative staff are currently using AI tools for legislative work, so it’s not a hypothetical.”
—Will Clark, NCSL’s Center for Legislative Strengthening
Will Clark, a program principal for NCSL’s Center for Legislative Strengthening, shared findings from NCSL’s survey of legislative staff about their use of AI tools in the workplace.
“No. 1, legislative staff are currently using AI tools for legislative work, so it’s not a hypothetical, it’s not something that we’re mulling over,” Clark says. He says research—for historical backgrounds, legal summaries and news aggregations—tops the list of AI uses by legislative staff.
Clark says AI policies are typically being developed for individual offices, not for whole legislatures. The most common policies prohibit the use of AI tools, he says, citing the survey conducted six months ago. “A lot of legislatures were still wary of these tools and still trying to figure out how they wanted to implement them for legislative work, so I think a lot of legislatures kind of put the hold on usage temporarily.”
He says policies commonly direct staff to exercise judgment and critical thinking, and they often outline risks and how to mitigate them.
The risks come mostly from trusting AI too much, says Sean McSpaden, principal legislative IT analyst and AI task force member with the Oregon Legislature. He also serves on the NCSL Task Force on Artificial Intelligence, Cybersecurity and Privacy.
McSpaden says AI can conduct research; write texts, emails and in-depth papers; and even generate videos of avatars that look and sound real. He demonstrated that with a short video of a man offering information about AI, completely generated by AI. It was created by the Center for Digital Government using the platform Synthesia, which lets users create AI-generated videos by typing in text.
He says AI has a potential role in an array of government operations, including “some of our back-office functions, our corporate office functions, those administrative functions that all of us either touch or are involved with or are responsible for administering in our particular organizations.”
All this potential comes with accountability, he says.
“I’m using it at work, I’ve got a choice to make: Do I believe the AI-generated summary is accurate and trustworthy and go no further? Or do I really need to make that proactive decision to dig a little deeper on my own to verify and validate those results to make sure they’re accurate, to make sure that they’re fit for the purpose of my work assignment?” McSpaden says.
Human oversight is critical because the credibility of the institution is on the line, he says.
And McSpaden urges legislators and staff to actively explore the technology now.
“We need to allow our employees to take this AI car out on the road to test it out in different driving conditions so we know how it’s going to work, and so we know what boundaries we ought to put on its use within the legislative environment,” he says.
Luis Kimaid, executive director of Bússola Tech, agrees. His Brazil-based organization works with parliaments internationally to support their exploration of AI.
“Parliaments are inherently a human institution designed to mirror human values, our history, our traditions,” Kimaid says. “So, we don’t believe it’s productive to have a discussion on AI replacing human expertise, rather on how AI can enhance this expertise to make our work faster, our work better, and on providing better services for the members and for the citizens we all serve here.”
Bússola offers free resources, including the report “Key Considerations of AI in Parliaments,” which explores legislative drafting, historical archiving, procedural guidance, constituent relations and other uses to help legislative bodies devise their own approaches to AI.
AI is inevitable and growing exponentially, the panelists say. And while a lot of its uses are apparent—as simple as coming up with a clever bill title or acronym—it also brings capabilities “we can’t even imagine yet,” McSpaden says.
“The explosion of generative AI—it’s making us all hurriedly think about a whole bunch of things all at once, asking ourselves key questions,” he says. “Like how do our employees feel about the use of AI? How will their work change, will there be job loss to any significant level? Will our employees or those that we serve really trust the output of these solutions? What kinds of guardrails are we going to have to put in place as an organization?”
For more on AI, see NCSL’s Artificial Intelligence Policy Toolkit.
Kelley Griffin is a senior editor in NCSL’s Communication Division.