Last March, Hawaii State Senator Chris Lee introduced legislation urging the U.S. Congress to examine the benefits and risks of artificial intelligence technologies.
But he didn’t write it. Artificial intelligence did.
Lee asked ChatGPT, an AI-based system trained to follow instructions and carry on conversations, to draft legislation highlighting the potential benefits and harms of AI. Within moments, the chatbot produced a resolution. Lee copied and pasted the entire text without changing a word.
The resolution passed in April with bipartisan support.
“It was a statement that using AI to write legislation – an entire law – was perhaps the most important thing we could do to demonstrate what the good and bad sides of AI might be,” Lee, a Democrat, said in an interview with Stateline.
ChatGPT, which received widespread national coverage this year, is just one example of artificial intelligence. AI can refer to machine learning, in which companies use algorithms that mimic the way humans learn and perform tasks. AI can also refer to automated decision-making. More broadly, the words “artificial intelligence” can conjure up images of robots.
Although organizations and experts have attempted to define artificial intelligence, there is no consensus on a single definition. This leaves individual states wondering how to understand the technology so they can put rules in place.
“There is no silver bullet for what to do next,” Lee said.
The lack of a uniform definition poses a challenge to lawmakers trying to craft regulations for this growing technology, according to a report from the National Conference of State Legislatures (NCSL). The report comes from the NCSL Working Group on Artificial Intelligence, Cybersecurity and Privacy, made up of legislators from about half the states.
Many states have already passed laws to study or regulate artificial intelligence. In 2023, lawmakers in at least 24 states and the District of Columbia introduced AI-related bills, and at least 14 states adopted resolutions or enacted laws, according to an analysis by the national legislative group.
Some, like Texas and North Dakota, created groups to study artificial intelligence. Others, among them Arizona and Connecticut, addressed the use of artificial intelligence systems within state government entities.
Connecticut’s new law, which will require the state to regularly evaluate its systems that contain AI, defines artificial intelligence in part as “an artificial system” that performs tasks “without significant human oversight or that can learn from experience and improve performance when exposed to data sets.”
But each state that defines AI in its legislation does so differently. Louisiana, for example, stated in a resolution this year that artificial intelligence “combines computer science and robust data sets to enable problem-solving.”
“I think the definition is so gray because it’s such a broad and expanding field that people generally don’t understand,” Lee said.
AI is a touchy subject, but Rhode Island state Rep. Jennifer Stewart, a Democrat who serves on the state’s Innovation, Internet and Technology Committee, said uncertainty should not prevent lawmakers from moving forward.
“I believe we can regulate and exploit what we have created,” she said. “And we shouldn’t be nervous or afraid about wading into these waters.”
Other efforts to define AI
The National Artificial Intelligence Initiative Act of 2020 sought to define AI, describing it as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” according to the federal law, which was enacted on Jan. 1, 2021.
President Joe Biden’s Blueprint for an AI Bill of Rights, a set of guiding principles developed by the White House for the use of automated systems, expands the definition to “automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”
The European Union, Google, a trade group known as BSA | The Software Alliance and many other entities have offered similar but distinct definitions of artificial intelligence. But AI experts and lawmakers have yet to settle on a conclusive definition – and are still debating whether a concrete definition is even necessary to pursue a regulatory framework.
At the most basic level, artificial intelligence refers to machine-based systems that produce an output based on the information input into them, said Sylvester Johnson, assistant vice provost for public interest technology at Virginia Tech.
But individual AI programs operate differently depending on how they have been trained to use data, a distinction lawmakers need to understand, Johnson said.
“AI is evolving very quickly,” he said. “If you really want the people who make policy and the legislatures at the federal or state level to be richly informed, then you need an ecosystem designed to provide some sort of concise and accurate way to inform people about trends and changes happening in technology.”
Deciding how broadly to define AI poses a significant challenge, said Jake Morabito, director of the communications and technology working group at the American Legislative Exchange Council. ALEC, a conservative public policy organization, supports free market solutions and enforcement of existing regulations that could cover diverse uses of AI.
A “light touch” approach to AI regulation would help the United States become a technology leader on the global stage, Morabito said, but given the fervor over ChatGPT and other systems, lawmakers at all levels should study the technology’s developments to understand it better.
“I just think this technology is out of the bag and we can’t put it back in the bottle,” Morabito said. “We have to understand it well. And I think there’s a lot that lawmakers can do to understand how we can maximize the benefits, mitigate the risks, and ensure that this technology is developed on our shores and not overseas.”
Some experts believe lawmakers don’t need a definition to govern artificial intelligence. When it comes to a particular application of artificial intelligence – a specific area in which AI is used – a definition is not strictly necessary, argued Alex Engler, a fellow in governance studies at the Brookings Institution.
Instead, he said, a basic set of rules should apply to any program using automated systems, regardless of its purpose.
“Basically, you can say, ‘No matter what algorithm you use, you have to meet these criteria,’” Engler said. “Now, that doesn’t mean there’s literally no definition, it just means you don’t count some algorithms in and others out.”
Focusing on specific systems, such as generative AI that can create text or images, may not be a good approach, he said.
The central question, according to Engler, is: “How can we update our civil society and consumer protections so that people can still benefit in the age of the algorithm?”
Potential harms
Legislation passed by some states in recent years has attempted to answer this question. Although Kentucky is not at the forefront – the state Legislature recently created new committees focused on technology – Sen. Whitney Westerfield, a Republican and member of the NCSL’s AI task force, said the “avalanche of bills” nationwide is because people are scared.
AI technology is not new, but now that the topic is in the spotlight, the public – and lawmakers – are starting to respond, he noted.
“When they (lawmakers) have a legislative gavel in their hand, everything is a nail,” Westerfield said. “And if there’s a story that comes up about this, that or the other, it doesn’t even necessarily have to affect their constituents, I think it just adds fuel to the fire.”
The potential harms associated with the use of artificial intelligence are creating momentum for increased regulation. For example, some AI tools can produce tangible harm by replicating human biases, resulting in decisions or actions that favor certain groups over others, said Megan Price, executive director of the Human Rights Data Analysis Group.
The nonprofit group applies data science to analyze human rights violations around the world. Price designed several methods for statistical analysis of human rights data, which helped her in her work estimating the number of deaths linked to the conflict in Syria. The organization also uses artificial intelligence in some of its own systems, she said.
The potential implications of artificial intelligence and its power have created an appropriate sense of urgency among lawmakers, Price said. And it’s crucial to assess potential harms and uses, as her team does.
“So the real question is, when a mistake is made: What is the cost, and who pays for it?” she asked.
Also worth noting is a new focus on social justice in tech, Virginia Tech’s Johnson said. “Public interest technology” is a growing movement among social justice groups that focuses on how artificial intelligence can work for the public good and public benefit.
“If there’s any reason to hope that we can actually advance our ability to regulate technology in ways that improve people’s lives and their outcomes,” Johnson said, “this (public interest technology) is the path to pursue.”
Stateline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger with questions: (email protected). Follow Stateline on Facebook and Twitter.