US President Joe Biden is expected to issue a new executive order on artificial intelligence in the coming weeks. Its content and scope are the subject of much speculation in Washington.
For many months, several lines of work at the White House have addressed AI policy priorities that closely track the broader policy debate in Washington. At the same time, much of the energy of this work is consumed by issues beyond the scope of operational AI governance. As in Congress, the Biden administration is concerned with issues of competitiveness – finding ways to keep America ahead in technological development – and national security.
That said, the message that responsible development of AI tools is key to an American style of innovation increasingly echoes in the policy debate. Whether and how this will be reflected in the forthcoming executive order remains an open question.
Some official guidance on the scope of the executive order has begun to emerge. At a Chamber of Commerce event this week, U.S. Deputy National Security Advisor Anne Neuberger called the order “incredibly comprehensive.” According to a NextGov report on the event, Neuberger added that it is “a bridge to regulation, because it pushes the boundaries and is only within the bounds of what is permitted…by the law.” MLex reported that Neuberger’s remarks focused on the importance of solving the watermarking challenge for AI-produced content to make provenance tracking easier.
At the DEF CON hacker convention in August, Arati Prabhakar, director of the White House Office of Science and Technology Policy, told reporters that establishing federal government policies on AI had become an urgent priority for the administration. “It’s not just a normal process sped up, it’s just a completely different process,” she said.
The executive order could also find ways to build on the voluntary commitments the White House has secured from major AI companies, which will also be among the proposals the United States presents at the upcoming UK AI Safety Summit, according to MLex’s report on Neuberger’s remarks.
Importantly, the administration appears aware of the global geopolitical context of AI development and intent on sharing the mic with allies who are also advancing new AI principles. In announcing the latest round of voluntary commitments, the White House said they were developed in consultation with 20 other listed governments. Additionally, the administration acknowledged that the commitments “complement Japan’s leadership in the G7 Hiroshima Process, the UK AI Safety Summit, and India’s leadership as chair of the Global Partnership on AI.”
Meanwhile, federal agencies continue to implement policies mandated by the prior executive order on AI, Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which directed agencies to implement principled policies on the deployment of AI systems. The Department of Homeland Security this month announced new policies developed by its AI task force, including one on “the acquisition and use of artificial intelligence and machine learning by components of the Department of Homeland Security,” which directly responds to Executive Order 13960.
The other new DHS policy, Directive 026-11, goes further, covering the agency’s “use of facial recognition and face capture technologies.” Some highlights of the policy:
- All uses of facial recognition and face capture technologies must be thoroughly tested to ensure there is no unintended bias or disparate impact, consistent with national standards.
- Citizens must have the right to opt out of facial recognition for specific, non-law enforcement uses.
- Facial recognition may not be used as the sole basis for any law enforcement or civil enforcement action.
- Any new use of these technologies must go through an oversight process involving the Privacy Office, the Office for Civil Rights and Civil Liberties, and the Office of the Chief Information Officer.
At the same time, DHS Secretary Alejandro Mayorkas named Chief Information Officer Eric Hysen as the department’s first chief AI officer, adding the title to his existing role. According to the press release, “Hysen will promote AI innovation and safety within the Department, while advising Secretary Mayorkas and Department leadership on AI-related issues.” The Senate Homeland Security and Governmental Affairs Committee recently approved a bill that would create an AI lead role in every federal agency.
Like DHS, federal agencies across the government appear to be considering how AI will impact their missions, whether through executive order or legislative mandates. Director Prabhakar’s comments to DEF CON reporters also mentioned how encouraged she was by the work of federal agencies: “They know it’s serious, they know the potential, and so their departments and agencies are really stepping up efforts,” she said.
The home page of the official National AI Strategy website now provides direct links to the AI homepages of 27 agencies and departments, ranging from the Department of Defense’s Chief Digital and Artificial Intelligence Office, AI.mil, to the Department of Education’s Office of Educational Technology, which recently published a report on AI and the future of teaching and learning.
Another major development is expected in the form of updated guidance from the Office of Management and Budget, arguably the most influential agency when it comes to coordinating and implementing government-wide policy initiatives.
Although a prior White House announcement said OMB’s draft guidance would be released “this summer,” it is still not available for public comment. Once implemented, the new OMB policies will update an existing memorandum to the heads of all executive agencies intended to “inform the development of regulatory and non-regulatory approaches to technologies and industry sectors that are empowered or enabled by AI and consider ways to reduce barriers to the development and adoption of AI technologies.”
Advocates of all stripes have engaged on the forthcoming executive order. Many joined the Leadership Conference on Civil and Human Rights in calling on the administration to make its AI Bill of Rights binding on the entire federal government. Others, like the U.S. Chamber of Commerce, advocated for targeted adjustments to immigration policies to attract and retain top AI talent in the United States.
Although the details remain a mystery, it is clear the order will include a wide range of policies that extend to the limits of the executive branch’s legal powers. At least when it is not shut down, the federal government will continue to act on AI.
Here’s what else I’m thinking about:
- The foundation of AI efforts should be a comprehensive federal privacy law. In remarks at a recent Global Forum event, Rep. Cathy McMorris Rodgers, R-Wash., chairwoman of the House Energy and Commerce Committee, reminded policymakers that privacy legislation would help integrate consumer protections into AI development and deployment practices. “I fear that lawmakers are losing sight of what should be the foundation of any AI effort, which is establishing comprehensive protections over the collection, transfer and storage of our data,” she said. “It’s critical that we do this before we embark on AI legislation.” She also stressed the importance of protecting children online by passing comprehensive legislation, saying the American Data Privacy and Protection Act would provide “the strongest protections for children of any federal or state law.” McMorris Rodgers said she was personally committed to “doing everything in my power to build consensus” on privacy legislation.
- Another important set of standards addresses the use of AI in the employment context. The workplace sector continues to show that it is at the forefront of AI governance innovation, this time through a convening hosted by the Future of Privacy Forum. In collaboration with ADP, Indeed, LinkedIn and Workday, FPF published a report titled Best Practices for AI and Workplace Assessment Technologies. Along with other industry-driven principles, local regulations and federal oversight, the employment context serves as a testing ground for AI best practices around privacy, nondiscrimination, human oversight and transparency.
- How can human rights be protected in immersive technologies? A new report from the NYU Stern Center for Business and Human Rights examines two of the most pressing issues related to the development of immersive technologies: the potential erosion of privacy, including mental privacy, and the proliferation of harmful behavior in virtual environments, including sexual harassment and child abuse. It includes a set of privacy recommendations for extended reality platforms and policymakers.
Upcoming events:
- September 26: The Connected Health Initiative is hosting a conference titled AI and the Future of Digital Healthcare (National Press Club).
- September 27: The monthly Technology Policy Happy Hour will take place (Dirty Habit).
- September 27: Politico hosts its AI and Tech Summit (hybrid).
- September 27-28: Fisher Phillips hosts a conference titled AI Strategies @ Work: Preparing Business Leaders for Tomorrow (Willard Intercontinental).
- September 28: The Information Technology Industry Council is organizing a conversation with OSTP Director Arati Prabhakar on Building Responsible AI (ITI).
- September 28: Public Knowledge hosts its 20th annual IP3 Awards (Ronald Reagan Building).
Please send your comments, updates and draft EO text to cobun@iapp.org.