Education tech policy groups call on federal officials for AI guidance
Nineteen technology policy and advocacy groups on Tuesday penned a letter to White House officials, responding to recent research findings that schools’ use of technology is eroding student privacy and equity.
In a letter addressed to U.S. Education Secretary Miguel Cardona and White House Office of Science and Technology Policy Director Arati Prabhakar, the groups referenced a report published this month by the Center for Democracy and Technology that sounded alarms about student privacy. Though the White House issued its “AI Bill of Rights” blueprint last year, the coalition is asking the Education Department to issue further guidance on the use of technology in schools, such as online monitoring and content filtering, and to clarify that student civil rights laws apply to those uses.
The letter’s signatories include the American Civil Liberties Union, the American Association of School Librarians, the American Library Association and the National Center for Learning Disabilities, among other groups.
“For the second year in a row, we found that schools are failing to use technologies in responsible ways that respect students’ rights,” Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, said in a press release. “The Department of Education has the authority to clarify, provide guidance and enforce decades-old civil rights protections to the use of technology in schools.”
The center’s report found that a majority of teachers reported that their school uses online monitoring technologies powered by artificial intelligence and content filtering or blocking software. About one-third of teachers reported that the software blocks LGBTQ+ and race-related content, “amounting to a digital book ban.” Further, 67% of teachers said they knew a student who had gotten in trouble due to content filtering or blocking software.
“One of the reasons that this is problematic is because the stated reason that schools are using this technology is oftentimes to keep students safe,” Elizabeth Laird, director of equity in civic technology with the Center for Democracy and Technology, said in a press call about the report on Monday.
The report also says that more than half of students reported having used generative AI, and students with individualized learning plans or accommodations for a disability were more likely to use the technology. Fewer than half of the responding teachers said they’d received substantive training on generative AI, and fewer than a quarter reported receiving training on how to respond if they suspected a student had used generative AI.
“There’s a real gap between maybe what teachers perceive students are doing and what they’re actually doing, and they haven’t gotten the types of support from schools to really help them make informed decisions. And so what that’s creating is that students are getting in trouble, and that if you have an [individualized learning plan] or a [disability], you’re more likely to get in trouble because of this,” Laird said.
The coalition’s letter argues that students are being denied the right to a free, appropriate public education, which exists to guarantee that students with disabilities receive an education that meets their individual needs.
“Every student who is negatively impacted by these technologies may experience a loss of civil liberties, privacy and educational opportunities,” the letter reads. “It is crucial to consider this impact against the backdrop of decades of nondiscrimination leadership in the education sector.”