The Department of Homeland Security is moving toward the use of generative artificial intelligence by issuing new guidance on how its staff should use commercial applications of the technology and by experimenting with creating its own models, the department’s top IT official told FedScoop.
DHS is rolling out a set of policies around specific AI technologies – a process that began in September with the release of guidance on the use of facial recognition and face capture technologies and now continues with a policy governing how the department will use commercial generative AI tools.
The memo is signed and dated October 24 but was posted to the department’s website on Thursday. DHS also recently issued a privacy impact assessment covering commercial generative AI tools conditionally approved for use within the department, including ChatGPT, Bing Chat, Claude 2, and DALL-E 2.
During an interview on FedScoop’s Daily Scoop Podcast, CIO Eric Hysen said DHS developed the new policy using the White House executive order on AI and the corresponding draft memo from the Office of Management and Budget on how federal agencies should implement the executive order “to ensure that we have a comprehensive governance approach for specific types of AI technology used across the department.”
“Our strategy has been: Because AI is a very broad technology – or set of technologies – we issue specific guidelines on different types of AI technologies,” Hysen said.
As part of the new guidance, Hysen writes that he has “determined that DHS must enable and encourage DHS personnel to responsibly use commercial products to harness the benefits of generative AI and ensure that we continually adapt to the future of work.”
The memo explains how the department will “develop and maintain a list of commercial Gen AI tools conditionally approved for use on open source information only” – such as those included in the recent privacy assessment – as well as the requirements and safety standards that staff must follow when using commercial generative AI tools.
“Immediate and appropriate applications of commercial Gen AI tools to DHS activities could include generating first drafts of documents that a human would then review, conducting and synthesizing research on open source information, and developing briefing documents or preparing for meetings and events. I have personally found these tools valuable in these use cases already and encourage employees to learn, identify and share other helpful use cases with each other,” the memo reads.
At the same time, DHS has also been “experimenting” with creating its own large language models internally and with industry support, he said.
“What we’re really looking to do there is learn,” Hysen told FedScoop. “I want a portfolio of AI projects that use models from different companies and allow us to understand the benefits of different types: projects that use closed proprietary models, that use open source models, and that test different ways of deploying these models, some of which might be shared commercial cloud instances, some that we might deploy on our own cloud infrastructure, and some that we might deploy internally on our own hardware. We’re really in learning mode here and looking to try a lot of different things technically, which, as I’ve talked about with other CIOs in government and the private sector, is really the mode that everyone is in.
“We want to maximize our ability to learn how to leverage these technologies to support our mission,” he said.
DHS also issued broader guidance in September governing how DHS components must acquire and use AI and machine learning technologies.
Around the same time, Secretary Alejandro Mayorkas appointed Hysen – who has been the department’s CIO since 2021 – as DHS’s first AI chief. And last April, Mayorkas launched a DHS artificial intelligence task force, which, co-chaired by Hysen, is responsible for developing policy.
As such, Hysen and DHS developed a vision for the adoption and responsible use of AI that precedes the White House order and draft OMB guidance, which will require federal agencies to appoint chief AI officers within 60 days of the directive being finalized. FedScoop is tracking these CAIOs as they are named.
“It’s not something that started when we added this title,” Hysen said. “This is work that has been ongoing for many years: many of our agencies and offices have been using AI, data science and machine learning in their operations for many years now. But … with the explosion of interest in generative AI and other topics over the last year, we saw the need to really focus our department-wide approach.”
The issuance of the executive order did not necessarily change any of that either, as DHS worked closely with the White House and anticipated its requirements.
“As you can imagine, any document as comprehensive as the executive order has been in the works for some time, and many parts of the department have been working closely with the White House and the interagency for some time on several aspects of this. So we anticipated some of these requirements,” he said.
Hysen said Mayorkas has been a driving force in steering the department toward adopting AI, instead of hesitating as some federal agencies have done.
“Early on, he was using ChatGPT and other tools, right after they were released, in his personal life, and asking me and others how we could leverage the benefits of these technologies to better empower our staff and give them what they need to get their jobs done,” Hysen said.
“We interact with more members of the public every day than any other federal agency. And the workload of our employees is only increasing,” he said. “And so, when it comes to our use of AI within the department, the secretary saw early on that this could be a tool that could act as a force multiplier for us and enable our front-line agents and officers to spend less time on routine paperwork and more time on their security missions, which would ultimately improve the security of our country.”