President Joe Biden’s 2024 campaign has assembled a special task force to prepare its responses to misleading AI-generated images and videos, drafting court filings and developing new legal theories it could deploy to counter misinformation efforts that tech experts say could disrupt voting.
The task force, made up of the campaign’s top lawyers and outside experts such as a former senior legal advisor to the Department of Homeland Security, is studying what steps the campaign could take if, for example, a fake video emerged of a state election official falsely claiming that polls are closed, or if an AI-generated image falsely portrayed Biden as urging non-citizens to cross the US border to vote illegally.
The goal is to produce a “legal toolkit” that would allow the campaign to respond quickly to virtually any scenario involving political disinformation, and in particular AI-created deepfakes – convincing audio, video or images created using artificial intelligence tools.
“The idea is that we would have enough in our quiver that, depending on the hypothetical situation we face, we could pull out different elements to deal with different situations,” said Arpit Garg, deputy general counsel for the Biden campaign, adding that the campaign intends to have “model and draft pleadings” that it could file with U.S. courts or even with regulators outside the country to combat foreign disinformation actors.
In recent months, the campaign has formed an internal working group, dubbed the Social Media, AI, Disinformation (SAID) Legal Advisory Group, as part of a broader campaign effort to counter all forms of disinformation, TJ Ducklo, a senior adviser to the Biden campaign, told CNN.
The group, led by Garg and campaign general counsel Maury Riggan alongside outside volunteer experts, has already begun drafting some legal theories while continuing to research others, Garg said. The aim is to be prepared enough to run a campaign-wide simulation exercise in the first half of 2024.
The rush highlights the vast legal gray area surrounding AI-generated political speech and how policymakers are struggling to respond to the threat it could pose to the democratic process. Without clear federal legislation or regulations, campaigns like Biden’s are forced to take matters into their own hands, trying to find ways to respond to images that might falsely portray candidates or others saying or doing things they never did.
In the absence of a federal ban on political deepfakes, Biden campaign lawyers have begun thinking about how they could use voter protections, copyright law and other existing statutes to pressure or persuade social media companies and other platforms to remove misleading content.
Officials said the campaign was also considering how new disinformation laws in the European Union could be invoked if such a campaign were launched or hosted on a platform based there. A recently adopted European law known as the Digital Services Act imposes strict new transparency and risk mitigation requirements on large tech platforms, violations of which could result in billions of dollars in fines.
The group is drawing legal inspiration from a recent case in which a Florida man was convicted under a Reconstruction-era law for sharing fraudulent claims on social media about how to vote. The law in question criminalizes conspiracies to deprive Americans of their constitutionally guaranteed rights and has previously been used in human trafficking cases. Another statute the group is examining is a federal law that makes it a misdemeanor for a government official to deprive a person of their constitutional rights – in this case, the right to vote, Garg said.
Current U.S. election law prohibits campaigns from fraudulently misrepresenting other candidates or political parties, but it remains an open question whether this prohibition extends to AI-generated content. In June, Republicans at the Federal Election Commission blocked a decision that could have clarified that the law extends to AI-generated representations; the agency has since begun considering the idea but has yet to reach a decision.
As part of that proceeding, the Republican National Committee told the FEC in public comments last month that while it “is concerned about the potential misuse of artificial intelligence in political campaign communications,” it believes a current proposal that would explicitly give the FEC oversight of political deepfakes would overstep the commission’s authority and “raise serious constitutional concerns” under the First Amendment.
The Democratic National Committee, meanwhile, urged the FEC to crack down on intentionally deceptive uses of AI, arguing that the technology enables “a new level of deception by rapidly manufacturing hyperrealistic images, audio and video files” that could mislead voters.
Despite rising alarm about AI among members of Congress, U.S. lawmakers are still in the early stages of grappling with the issue and appear no closer to finalizing AI-related legislation. Beginning this summer, Senate Majority Leader Chuck Schumer convened a series of closed-door briefings to keep lawmakers up to date on the technology and its implications, addressing topics such as AI’s impact on workers, intellectual property and national security. Those sessions are ongoing.
Schumer has signaled that as the election approaches, he may seek to fast-track an AI bill focused on elections before turning to legislation addressing the technology’s other effects. But he has also stressed the need for a deliberate process, saying results should be expected within months, not days or weeks. A separate proposal introduced in September by a bipartisan group of senators would prohibit the deceptive use of AI in political campaigns, but it has not yet advanced.
With no promise of regulatory clarity on the horizon, Biden’s team has been forced to confront the threat directly.
Some of the campaign’s counter-disinformation efforts, including coordination with DNC officials, have been in place since the 2018 midterm elections. But the rapid increase in the availability of sophisticated AI tools over the past year makes AI a unique factor in the 2024 race, Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told CNN.
In response, tech companies such as Meta – the parent company of Facebook and Instagram – have announced restrictions and requirements for AI in political speech on their platforms. This month, Meta said it would bar political advertisers from using the company’s new artificial intelligence tools that help brands generate text, backgrounds and other marketing content. Any political advertiser who uses deepfakes in ads on Facebook or Instagram must disclose that fact, the company said.
Concerns about the use of AI technology extend beyond the creation of fake video and audio files. Darren Linvill, a professor at Clemson University’s Media Forensic Hub, said AI can also be used to mass produce online articles and comments designed to support or attack a candidate.
In a report released Thursday anticipating threats ahead of the 2024 election, Meta’s security team warned that AI could be used by nefarious groups to create “larger volumes of compelling content,” but also expressed optimism that advances in AI could help root out coordinated disinformation campaigns.
The Meta report details how some social media platforms are struggling to deal with deceptive uses of AI.
“While foreign interference campaigns using AI-created content (or any content for that matter) are considered unquestionably abusive and adversarial, bona fide political groups and other domestic voices leveraging AI can quickly fall into a ‘gray’ area where people will disagree on what is permitted and what is not,” the report said.
Meta specifically pointed to an ad released by the RNC in April that used AI to create fake images imagining a dystopian United States if Biden were re-elected.