The Biden Deepfake Robocall Is Just the Start of Our AI Election Hell

FAKE OUT

“Don’t leave your common sense behind.”

U.S. President Joe Biden holds a Cabinet meeting at the White House on October 02, 2023 in Washington, DC. Biden held the meeting to discuss economic legislation, artificial intelligence, and gun violence.
Kevin Dietsch / Getty Images

Voters all over New Hampshire seemingly received phone calls from President Joe Biden in the lead-up to the state’s primary on Tuesday. The call—which came from the phone number of a former New Hampshire Democratic Party chair—seemed to urge people not to vote in the upcoming primary and “save their vote” for November’s general election.

“What a bunch of malarkey,” Biden’s voice stated on the call, echoing one of the president’s oft-used chestnuts. It added, “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”

Of course, the phone call and its message never came from Biden, but from an AI-powered deepfake that mimicked the president’s voice. The effort appeared to be an attempt to disrupt a write-in campaign for Biden, whose name won’t appear on the ballot because New Hampshire scheduled its primary ahead of South Carolina’s Feb. 3 contest, the Democratic Party’s first official nominating election.

Kathy Sullivan, the former New Hampshire Democratic Party chair whose number was linked to the robocalls, told NBC News that she wasn’t aware of who was behind the deepfaked Biden calls, but hoped that they would be prosecuted to the fullest extent of the law. The state attorney general’s office later announced that they would be investigating the matter, decrying the deepfake as an attack on our democracy.

“These messages appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters,” the AG said in a statement. “New Hampshire voters should disregard the content of this message entirely.”

The robocall underscored the pressing danger that the emerging technology poses as the nation gears up for November’s election—and how ill-prepared policymakers and the public are for its impact. While companies like OpenAI have made attempts to limit or outright restrict the ways that their tech can be used by politicians, the proliferation and easy accessibility of AI means practically anyone can use these products to spread misinformation.

“This is not a surprise,” Dominique Shelton Leipzig, a cybersecurity expert and author of Trust: Responsible AI, Innovation, Privacy and Data Leadership, told The Daily Beast. “Criminal elements in or outside our country might wish to undermine our democracy and use [AI] to push out misinformation and encourage people to not vote, which is what we saw in New Hampshire.”

The Biden robocall is just the latest in a recent string of election deepfakes. In June 2023, former President Donald Trump posted a deepfake video on Truth Social of Florida Gov. Ron DeSantis on a Twitter Spaces call with Elon Musk, George Soros, Adolf Hitler, and the devil. Meanwhile, DeSantis ran what may have been the first deepfaked campaign ad, featuring AI-generated images of Trump embracing Anthony Fauci. He also later released an ad containing a deepfake of Trump’s voice attacking Iowa Gov. Kim Reynolds.

The Biden robocall occurred mere days after OpenAI banned a user for creating a bot with ChatGPT that mimicked Democratic presidential candidate Dean Phillips. Though its developer defended the bot, saying that it could help voters better engage with Phillips’ platform, experts told The Washington Post that it could pose a serious misinformation hazard—especially since chatbots are prone to getting basic facts completely wrong.

Generative AI tech has had geopolitical implications as well. Disinformation groups linked to the Kremlin have used deepfakes of Ukraine’s former President Petro Poroshenko to trick foreign fighters into denigrating President Volodymyr Zelensky. Deepfake audio and videos have also been used since the early days of Russia’s invasion of Ukraine to spread misinformation and sow further chaos in the war.

Most experts, including Leipzig, agree that the issue is going to get much worse before it gets better. That’s in large part because lawmakers have historically moved at a glacial pace when it comes to regulating emerging technologies like social media and AI. Nor does it help that Big Tech executives like OpenAI’s Sam Altman seem to wield outsized influence in Congress.

As November’s elections approach, Leipzig said that it’s more important than ever that candidates get ahead of the issues AI can cause for the electorate. “Candidates can’t wait for legislation to be passed in Congress or at the state level to deal with these threats from individuals and nation states that potentially want to undermine our democracy,” she said. “Candidates have to take control and let voters know where they can get accurate information.”

She added that this may include things like an online database that contains accurate news, messaging, and policy points for the candidates. They also need to “speak loudly about it” so voters know exactly where they can get the most accurate information. “We need to make sure that voters are educated as soon as possible so they know that voices can possibly be spoofed using AI and sound exactly like politicians,” Leipzig explained.

Ultimately, though, Leipzig said that AI isn’t something that candidates or voters need to be afraid of. Instead, they should actually embrace the technology as a means to spread accurate and helpful information.

The idea, essentially, would be to use AI to automate tasks like campaign messaging and reaching out to voters—much like what the Dean Phillips bot was intended to do. By embracing the tech in this manner, AI could be “used to empower voters to get accurate information at scale.”

“That’s the exciting thing about AI,” Leipzig explained. “We need to get savvy with these new tools and learn how to use them effectively to push out accurate election information and messages.”

Still, the danger of AI is apparent, and the technology could have a devastating impact on our electoral process. While candidates can use it as a tool to improve their messaging, it can also be used to spread misinformation at a far greater scale. As November looms and states hold their primaries, there’s no doubt that 2024 will be the year that AI really hits the election cycle—for better and for worse.

“Humans control how AI will impact our elections—not the other way around,” Leipzig said. “It’s important for our voters not to be gullible. Don’t leave your common sense behind. It’s also important for candidates to point voters to sources of correct information that speak to their messages—and using AI can make that happen.”