ChatGPT said it will ‘revolutionize’ digital government. Human experts are more cautious.

When asked, ChatGPT said chatbots may "revolutionize" how people deal with government. Human experts were far more cautious.

“As artificial intelligence (AI) continues to advance, state and local governments are beginning to explore how AI tools like ChatGPT could change the digital services they offer to residents. AI-powered chatbots are already being used by many governments to automate services, such as Medicaid enrollment, drivers license renewals, small-business resources, and more. However, the next generation of AI, which includes ChatGPT, has the potential to revolutionize the way governments interact with their citizens.”

That rosy assessment, predicting a bright future in which next-generation chatbots speed people with clarity and ease through any number of currently time-consuming interactions with government, comes not from any wide-eyed consultant or hype-laden PR pitch. Rather, it came directly from ChatGPT, the text-generating artificial intelligence tool that’s captured the internet’s attention since it was released last November.

The buzziest conversations around ChatGPT have revolved around some of its more nefarious potentials — students cheating on their homework, newbie hackers writing malicious code, cheap publishers undercutting journalists. But as for overhauling the way governments interact with their citizens, the revolution may not be as immediate as everyone’s new favorite chatbot predicted, according to actual humans who work with artificial intelligence and digital services on a regular basis.

While ChatGPT and similar products scour massive volumes of information to generate their conversational responses, such a chatbot operating in a government setting would have to be configured to recognize how specific programs and services work, to interact with residents from a variety of backgrounds and to ensure that its responses treat people equitably.

This article began with a prompt asking ChatGPT to write a detailed article about how it, and its generation of natural-language-processing tools, will change the digital services offered by state and local governments, especially those that already feature some kind of chatbot. While the bot’s own predictions could accurately be described as wildly enthusiastic, human technologists offered a mix of intrigue and skepticism.

Conversational, not clinical

One tech official who is eager to get his hands on the text-generating tool is Los Angeles Chief Information Officer Ted Ross, who said he’s “extremely excited” about the potential for city services to one day be equipped with virtual assistants that answer residents’ queries with human-seeming answers that are more helpful than a list of links.

“AI can dramatically change the way we search and the kinds of results we get, so you’re not just getting links to click on that you didn’t have to hunt through,” he said. “You’re actually getting a written response that summarizes it for you. That’s super-important.”

Chatbots have been widely used in state and local government for years, most often as a kind of search engine to help residents find the right information on anything from DMV services to unemployment claims to rental assistance. Many states also rushed chatbots into production during the early days of the COVID-19 pandemic, using them to speed along applications for emergency benefits or swat down misinformation.

But the big appeal of ChatGPT and other so-called large language models is that they’re capable of generating detailed responses that mimic human conversation, rather than spitting out clinical directions to another webpage. By training on billions of data points and being tuned to certain behaviors or subject areas, these bots effectively look at a string of text and predict which words should come next.

Running the prompt for this article over and over produced dozens of different responses, with varying degrees of accuracy.

When rattling off examples of chatbots and virtual assistants that could be improved by its capabilities, ChatGPT identified some that actually exist, like a Texas Workforce Commission bot that helps business owners and unemployed people navigate that agency’s website, or one offered by the Florida Department of Children and Families to help people find applications for services like nutrition assistance and Medicaid.

It also made up some of its own bots:

“Meanwhile, in Los Angeles, the city’s Department of Transportation is using AI-powered chatbots to provide real-time traffic updates and help commuters plan their routes,” ChatGPT wrote. “This has helped to reduce traffic congestion and improve the overall efficiency of the city’s transportation system.”

Alas, the City of Los Angeles’ website has a chatbot, but don’t ask it how long it’ll take to get to Dodger Stadium. Right now, it’s limited to trawling through information contained in the city’s 311 directory. But that doesn’t mean it couldn’t eventually be much more expansive and dynamic in how it responds to Angelenos, Ross said.

“We have all these questions people ask about paying for parking tickets or filling a pothole or resources at the library,” he said. “Governments have tremendous amounts of information, and governments historically do a very bad job of making the information understandable.”

Inherently value-laden

But like all AI tools, ChatGPT is only as good as the data it can access, said Michael Ahn, an associate professor of public policy at the University of Massachusetts Boston, who’s studied the potential of language models. ChatGPT launched with training data that ran only through late 2021, and it has since been updated with more recent information. But to be truly useful in a government setting, chatbots built on the model will have to process real-time information, as well as understand what a user is seeking to accomplish, he said.

“Currently, government chatbots are similar to a traditional search engine,” he said. “It might point you in the right direction. Trained in government services, government regulations and also on where citizens are coming from and what they need, then you can kind of see the difference. The difference is what you get if you use a traditional search engine versus using ChatGPT, which understands your needs, circumstances and purpose.”

But that also means that governments that want to integrate ChatGPT or other large language models — like Google’s Bard or Meta’s LLaMA — into their digital services will have to create new guardrails to ensure that decisions assisted by this technology are made fairly and that any sensitive information users share isn’t mishandled.

When asked about its ramifications on privacy and equity in digital government, ChatGPT offered only a surface-level response: “In terms of safeguarding citizens’ data privacy and ensuring equity, AI models like ChatGPT can help by offering secure and personalized digital services to citizens. ChatGPT can tailor responses based on the user’s location, language, and other demographics, ensuring that all citizens receive equal access to government services.”

It’s not quite that simple, said Ahn. While an advanced data and language-processing model may eventually be able to give real-time analyses about traffic conditions or offer a chattier interface for a person applying for, say, a business permit, there are many government services, like those related to public welfare, that will always require a human touch.

“Government decisions are inherently value-laden,” Ahn said. “These areas, like public welfare and homelessness, anything with the human element that requires some value judgment, ChatGPT may not be the best.”

Sam Altman, the CEO of OpenAI, the software lab behind ChatGPT, admitted as much in February, writing that the model “has shortcomings around bias” after reports that it was generating responses full of racist and sexist language.

Ahn recommended that any government agency looking to integrate large language models into its services adopt some kind of oversight mechanism to review these systems for biases that result in policy decisions that harm already disadvantaged groups.

“It’s going to be the role of government officials to review decisions and have the final say,” he said. “I believe AI is going to be used in government and everywhere in the world. But human agents make the final decisions. We probably need a dedicated agency looking into this, to check the bias and the transparency of data and algorithms.”

Waiting for proof

Delaware CIO Jason Clarke told StateScoop that while there hasn’t been much discussion in his agency about ChatGPT, it has come up as a potential tool to process all the information that’s fed into the AI applications the state is using today. Delaware uses AI to support functions like monitoring traffic conditions and detecting network intrusions — analyzing large volumes of data much faster than any humans can.

A few Delaware state agencies also have chatbots. Clarke has no plans to dive into large language models just yet, though he said the public sector may one day embrace them after they take off in the private sector.

“I think the private sector has been able to showcase how it’s leveraged in business, and the state will be buying into those products too, as they become more mainstream,” he said. “They have a little bit more opportunity from an R&D perspective to be bleeding edge and to try it out in certain areas. Usually that turns into some service offering, at which point in time state and local government can buy since we rarely have the ability to build.”

And for all of his professed excitement, Ross, the Los Angeles CIO, conceded the conversation around ChatGPT is caught up in what the research and advisory firm Gartner defines as a “hype cycle.”

“You’ve got the run to the peak of inflated expectations,” he said. “There’s going to be a disillusionment, and then there’s going to be really getting traction out of the technology, and using it for what it is.”

The human touch

“Overall, the emergence of AI-powered chatbots like ChatGPT represents a significant opportunity for state and local governments to improve their digital services and better serve their residents. By leveraging the latest advancements in AI technology, governments can reduce costs, improve efficiency, and provide more personalized and effective services to their constituents.”

That’s how ChatGPT summed up its potential application in government. But what’s not mentioned there, or in any of the dozens of sample articles it generated, is that some government decisions will always require a human touch — a fact even an enthusiast like Ross was quick to mention.

“I feel like we’ll always have a call center, we’ll always have people who are either much more comfortable or feel like it’s much more important that they get in touch with a human being,” he said. “There’s always going to be something that requires that human touch at some point, not to say we couldn’t get away with it, but I just feel like our elected officials don’t want to be seen as reducing customer service.”

But AI-enabled interactions with government could be on the precipice of advancing quickly, Ross said. Depending on how willing a city or state is, virtual assistants could soon be using large language models to help with more basic tasks, like paying parking tickets or filling out a form for a business license.

“I think the call center and the call center operator, at least for some of the more straightforward items, we should be in that conversation the next year or two, which is a huge leap forward,” he said.

But ChatGPT and its ilk are still black-box environments, and any products built on them will only be as good as the quality of the data and model training they receive, said Ahn, the UMass Boston professor.

“Having ChatGPT itself is not going to do anything,” he said. “It goes back to data. They need to take care to produce, collect and then train with the data, and then watch out for the guardrails. You have to do it over and over. You have to watch out for bias, feed it more relevant data and then continue to train so you can have a more meaningful outcome.”

This story was featured in the StateScoop & EdScoop Special Report: Digital Services.
