Artificial intelligence isn’t human—it just acts like it is. And you should treat it that way, says Jennifer Pahlka.
The Code for America founder told attendees at the opening general session of the 2024 Legislative Summit to think of AI as an intern. “You may have some fantastic interns, but you’re always going to have a more senior staff member look over anything that your interns might write and make sure that it is correct and make sure that it is consistent with you and your values and what you’re trying to do for the people that you serve. And I think AI, especially generative AI, can be a great tool for all the things that you’re doing.”
“Your team is going to be responsible for that work,” says Pahlka, a senior fellow at the Niskanen Center and the Federation of American Scientists. “We don’t give power over to AI just because it is a powerful tool.”
“Assume this is the worst AI you’ll ever use. It gets better all the time.”
—Jennifer Pahlka
If you’re going to treat AI like a human, Pahlka says, it’s important to ask what it can do to help: HABA (what humans are better at) versus MABA (what machines are better at). “And now we’re at the point where we have to talk about not just humans and machines. Old-school software is better at certain things, and large language models, generative AI, predictive modeling and other kinds of AI are going to be better at different things.”
She says executive agencies and legislative staff are going to be looking at what they already do and asking how they can do it better using tools like AI. “You’re going to be using it to draft constituent letters more quickly. You’re going to be using it to draft legislation. There are going to be all these things that you already do that you’re potentially going to be able to do better if you use it, right? But your job is to ask the question: What needs doing that we haven’t been doing, and how are we going to do that?”
And if AI doesn’t understand the assignment? Just remember, Pahlka says, to “assume this is the worst AI you’ll ever use. It gets better all the time. So, I don’t give up just because it wasn’t useful to me in that moment. I’m going to keep coming back because these things are getting powerful at an alarming rate.”
Pahlka says she’s frequently asked by U.S. congressional staff what mandates and controls they can impose on agencies to ensure better technology outcomes.
But she thinks that’s the wrong question.
“I say, ‘You’ve been adding mandates and controls for 40 years now. How’s it working?’ There’s another model, an enablement and capacity-building model. So, you can double down on process or you can shift the outcome that you want.”
Pahlka says that many times, the problem isn’t with the people—it’s with the process. “Very often, you’ve got people who are quite competent but overly constrained. You can assume something’s wrong with them or you can assume there’s something wrong with the system—and there is something wrong with our system. You can add controls or you can ask, ‘How might I take some controls off of you so that you can do the job that we asked you to do?’”
The problem, Pahlka says, is that the use of technology has been constrained by systems that continually press the gas and brake pedals at the same time. “You’re degrading trust, and you need that trust, especially now in this era of AI where things are going to move so fast and where we need to be able to push that gas pedal and maybe come off the brake a little bit.”
Pahlka says lawmakers are in charge of the agenda around how to use AI, and using AI to peel back the layers of outdated technology can help reduce complexity and make government more responsive and functional.
“I get that it’s complicated, but it has to make sense to a person,” Pahlka says. “So, we can use AI to understand these things, but we can also use AI to simplify them so that we end up with government that makes sense to a person.”
Lisa Ryckman is NCSL’s associate director of communications.