A legislative committee held a second day of hearings on artificial intelligence on Tuesday to hear how state governments and federal officials are handling concerns about the technology.
Last year, 35% of organizations reported using AI technology in their business.
The rapid growth of artificial intelligence has left state and federal governments scrambling to determine whether the technology is safe and how, or even whether, it should be regulated.
Heather Morton is director of the National Conference of State Legislatures. She told state lawmakers on the Science, Technology & Telecommunications Committee that ensuring AI products are safe and secure is a top priority.
Since 2019, nearly one quarter of states have introduced or enacted legislation regarding AI. In California, local agencies must provide information about any job losses or replacements due to the technology.
Last year New Mexico proposed establishing a center for dryland resilience that would have used computer modeling and AI to diagnose and predict vulnerabilities. The bill never passed.
But Morton says states are at the forefront.
“Legislators are really trying to get their arms around what AI is being used at the state level,” she said.
With no major federal legislation in place, states are focusing on regulating AI use and protecting people from potential harms.
Morton says one of lawmakers’ top concerns is deepfake technology, which manipulates audio or video to create false but realistic recordings of individuals doing or saying things they never actually did or said.
California and Texas enacted laws in 2019 that prohibit the distribution of deceptive audio or visual media that seeks to injure a candidate’s reputation or deceive voters.
Morton says the federal Office of Management and Budget will soon release draft policy guidance, which will serve as a model for state and local governments, businesses and others using AI.