Are you trying generative AI now, or waiting to see what happens when others do it first?
That’s the question many government leaders are tasked with answering, and so far there doesn’t appear to be a consensus on the right approach among state and local governments.
The topic has generated a lot of buzz among lawmakers. According to the National Conference of State Legislatures, at least 25 states, Puerto Rico and the District of Columbia introduced bills on artificial intelligence in 2023, and 15 states and Puerto Rico adopted resolutions or enacted laws.
A Government Technology analysis of state and local government AI strategies revealed some trends.
GOVERNORS USE EXECUTIVE POWERS TO SHAPE AI POLICY
Since August, the governors of California, Virginia, Wisconsin, Oklahoma, Pennsylvania and New Jersey have announced executive orders focused on AI exploration.
Circumventing lawmakers is a move usually reserved for public health emergencies or disasters. In this case, however, most governors used their executive powers to direct the state to create a task force charged with exploring AI technology and making recommendations for its ethical use.
LOCAL GOVERNMENTS SET THEIR OWN AI RULES
The governments of Seattle, New York, San Jose, Calif., and Santa Cruz County have all issued independent policies or guidelines on how their employees should use AI in the workplace. These frameworks focus on the responsible use of AI: avoiding the sharing of sensitive information and steering clear of risks that could compromise government operations or lead to unintended negative consequences for constituents.
Most of the agencies that have adopted their own policies are in states that had not yet created statewide mandates or guidelines at the time.
SOME AGENCIES TAKE A CONSERVATIVE APPROACH TOWARDS AI
While many states have created task forces and research groups to study AI and expand its ethical use in government functions, at least one state is taking a “wait and see” approach that, for now, prevents employees from experimenting with AI at work.
In June, Maine Information Technology (MaineIT) directed all state executive branch agencies not to use generative AI for at least six months on any device connected to the state network. The ban does not apply to chatbot technology already approved for use by MaineIT; instead it targets ChatGPT and any other software that generates images, music, computer code, voice simulations or artwork.
According to the moratorium, “this will enable a comprehensive risk assessment to be conducted, as well as the development of responsible policies and frameworks governing the potential use of this technology.”
North Dakota was one of the first states to pass AI legislation earlier this year, but its law differs from what other states have pursued since then: North Dakota’s emergency measure states that AI is not a person.
A handful of states have attempted to introduce new laws focused on government agencies’ use of AI, but those plans have not yet been finalized or implemented. Several bills that would have created AI task forces or research groups have not gotten much further than their initial introduction in the legislature.