Kansas’ generative AI policy sets flexible guardrails

A new statewide policy in Kansas makes room for uncertainty in the rapidly changing domain of generative artificial intelligence.
A smartphone displays the ChatGPT logo in Washington, D.C., on March 15, 2023. (Getty Images)

A new generative artificial intelligence policy outlines how executive branch agencies in Kansas can embrace the new technology in a way that prioritizes citizens’ data and privacy while remaining nimble enough to adapt to new innovations.

The policy from the state’s Office of Information and Technology Services went into effect in late July. Jeff Maxon, Kansas’ interim chief information technology officer and chief information security officer, estimates the new policy’s already been modified three times.

While other states have opted to establish task forces to assess how the technology can impact the state, Maxon argued that time was of the essence in creating a formal policy for Kansas.

“One of the things we made pretty clear is how fast the technology is advancing,” Maxon said, noting that Kansas will likely establish a task force at some point. “We had to get something in the short term because it’s just moving so fast. It’s out, we can’t ignore it and so we just have to start looking at embracing the technologies that come out in a safe and secure way where we’re respecting our citizens’ privacy and protecting their data.”


The three-page policy applies to all business involving the state, including, but not limited to, developing software code, writing documentation and correspondence, conducting research, summarizing and proofreading documents, and making business decisions.

Responses generated by an AI must be reviewed by “knowledgeable human operators for accuracy, appropriateness, privacy and security before being acted upon or disseminated,” the state’s policy reads. Those responses cannot be used verbatim or to issue official statements. Maxon said taking such an approach helps protect against concerns of data manipulation and other issues with a technology that is in its infancy.

Maxon shared an example from an AI demonstration held for a state legislator this year, in which he asked where to find the best barbecue in Topeka: its answer was a business that had been closed for six years. When he ran the same question the following day, the response was "a little bit better" and provided three answers, but the closed restaurant still made the list, he said.

While the technology may not be perfect, Maxon said, by treating the state's policy as a living document rather than something that gets reviewed only every few years, Kansas positions itself to adjust its guardrails as generative AI improves.

“Really, the goals [of the policy] are to allow agencies the freedom and flexibility to start exploring the potential and what that looks like, but being aware of some of the risks to security and privacy and the unknowns that are still associated with it,” he said.


One of the policy’s safeguards notes that restricted-use information cannot be provided when interacting with generative AI — Kansas is treating everything entered into tools like ChatGPT as part of the public record.

“We need positive control of the data to ensure we have privacy,” Maxon said. “We don’t want vendors using that as an ability to mine citizens’ data to profit off of. Our contracts are very specific to the services they provide and they shouldn’t be using the data for anything else.”

So far, generative AI has primarily been used by developers in Kansas, Maxon said, but he added that it's been especially useful for updating legacy applications written in older languages like COBOL.

“We can’t find COBOL programmers, but we can actually have it help us program in COBOL or convert languages,” Maxon said, adding that he anticipates AI will be a helpful tool for users to navigate frequently asked questions.
