Vermont calls for AI ‘code of ethics’
Members of a first-of-its-kind Vermont state task force on artificial intelligence say regulating the technology itself would have unintended consequences, but they see promise in creating a “code of ethics” that could drive responsible use of AI within the state and position Vermont as a national leader.
The 14-member task force released its recommendations Wednesday after a series of monthly meetings that began in September 2018. The group, including representatives from government, academia, industry and civil liberties groups, studied current and future applications of AI, as well as how to ensure ethical testing and use without inhibiting innovation.
While other states have launched AI task forces, the Vermont group concluded that immediate action, in the form of a permanent commission, would benefit the country as a whole.
“I’m a huge believer in the ‘brave little state’ [of Vermont], and I think we should be a leader,” said John Cohn, a Vermonter and IBM Fellow in the company’s Internet of Things lab. “What I mean is that we should be a leader among other states. There’s no federation of states talking about this.”
Cohn told StateScoop that trying to legislate the algorithmic component of artificial intelligence, or how companies perform research and development on their own products, could have the unintended consequence of limiting innovation and business growth in the state. Rather than legislating AI itself, he said, lawmakers should regulate where the technology is applied, whether in public-safety products, autonomous vehicles or other emerging technologies.
“It isn’t like there’s a line of AI code that makes it somehow regulated, it’s what you do with it,” Cohn said.
The report also identified several sectors that could benefit from increased AI adoption and development, including precision agriculture, public safety and public health. To guard against infringements on civil liberties and reductions in employment as the technology develops, however, the task force would need to be made permanent, the report says. Task force member Eugene Santos Jr., a Dartmouth College engineering professor, said the idea is to create an independent agency that government officials and the public could approach with ideas, questions and concerns about the technology.
“AI is crosscutting,” Santos said. “The last thing Vermont agencies want is that one comes up with a policy, another comes up with a policy and you just find that ‘oh, they’re in conflict,’ and there’s nothing uniform about anything.”
In the report, the task force laid out a draft code of ethics that could serve as guidelines for both legislators and businesses. The draft, modeled after the European Union’s guidelines, says AI should be developed with fundamental respect for human dignity, individual freedom, democracy, equality and citizens’ rights, including the right to vote and the right to protest. The proposed code, which also includes requirements such as human oversight and transparency in AI development, would be a working document maintained by the state’s permanent commission, if one were created.
Vermont currently has no legislation regulating artificial intelligence, which is used in virtually all autonomous vehicles and facial-recognition programs. Though it did not agree on a concrete definition of artificial intelligence, the task force recommended that the state offer small-business grants and competitions to foster AI industry growth in Vermont, and that it create outreach programs to promote AI education in schools.