California officials share first impressions of generative AI exploration

As California prepares to publish its initial recommendations for generative AI use, two top officials share how the process is unfolding.

California’s first report on the use of generative artificial intelligence in state government is due next month, after an executive order Gov. Gavin Newsom signed in September.

The order requires state agencies to create risk assessment reports for how AI could affect their work, California’s economy and the state’s energy usage. Amy Tong, secretary of the California Government Operations Agency, is one of the officials responsible for developing recommendations for new policies and regulations to ensure that AI tools are purchased, developed and used responsibly — though she would rather not call it a task force.

“It’s a collective of public and private entities coming together,” Tong told StateScoop. “There’s so many interests in the state and the State of California. We are literally at the beginning of trying to develop the team for the implementation of generative AI.”

The team's singular focus, Tong said, is determining how the state can use generative AI to improve the customer experience for residents receiving public services.


California, the most populous state in the nation and the world's fifth-largest economy, is one of dozens of states racing to establish standards for how governments use AI.

California Chief Information Officer Liana Bailey-Crimmins, who also sits on the team, said her office plans to publish procurement guidelines for generative AI in next month's report and, early next year, to create a “sandbox” where agencies can test new technologies in a controlled environment.

“Just like we did with machine learning and other types of projects, they have to go through a rigorous stage process to say how, and if, this is the right technology in order to get the benefit to the state,” Bailey-Crimmins told StateScoop.

Avoiding traps

Generative AI tools are built on deep-learning models trained on massive amounts of data, allowing them to generate text, images and other content that appears human-generated. Across the country, states are racing to determine how best to use the rapidly advancing technology in their operations — a daunting task, Tong said.


“How do you leverage the benefits it brings, but also with a good understanding of the risk that comes along with it, so that programmatically and the policy development aspect, that people have a good understanding of how to utilize such technology?” Tong said.

Thoroughness is key, said Bailey-Crimmins, adding that next month’s report needs to be comprehensive in order to safeguard public services and ensure the successful adoption of new technology.

“You don’t want to get in a situation where you’ve implemented something, can’t continue to grow and enhance and adjust based on your constituents’ needs,” Bailey-Crimmins said. “Those are things we kind of look for to make sure that people don’t fall into that trap.”

Tong said she and her team will recommend pilot programs the state can use to test the efficacy of generative AI, with a focus on procurement guidelines and staff training to ensure the government’s workforce isn’t left behind.

“We’re taking the middle ground,” Tong said. “Let’s figure out what the impact and what are the risk mitigation that could come along with it by doing some pilots.”


‘Where we’re going’

Both Tong and Bailey-Crimmins emphasized the importance of transparency in the upcoming report, especially when implementing a technology like generative AI, which has the potential to disrupt how government services are offered.

“Once you have an idea of what can be done, we have to put ourselves in the lens of our residents, what type of services or what type of potential changes they might see in the delivery of the services to them,” Tong said. “Maybe it’s nothing, maybe all they feel is, ‘Oh, things are easier, things are faster,’ but you’ve got to maintain the level of transparency. Trust is very, very important to us.”

Bailey-Crimmins said the state will also consider new cybersecurity risks posed by generative AI.

“From a security perspective, generative AI brings forth not only traditional AI type of risk but could also bring new and amplified risk, so making sure we have the right terms and conditions to hold any vendor we’re working with accountable,” Bailey-Crimmins said. “As a state, we owe it to the public to continue to build trust, don’t get out in front of our skis, and make sure that we’re doing things in a very orderly and balanced fashion.”


Newsom’s executive order lists due dates for deliverables — including reports, employee trainings and pilot programs — running through January 2025. And though the frenzy around successfully implementing generative AI has created some urgency for state governments, Bailey-Crimmins said she isn’t concerned about the timeline.

“You have to show the vision and the direction,” she said. “Some of it’s gonna take two years to get there, some may take six months, but this is where we’re going.”

Colin Wood contributed reporting.

Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.