
Meta Is Trying to Rein in AI-Generated Political Ads

BOT POLITICS

Starting next year, advertisers on the social media platform will have to disclose whether AI-generated or deepfake content is being used.

Illustration of a giant pair of hands controlling people, with the Meta logo in the center. (Photo Illustration by Kelly Caminero / The Daily Beast / Getty)

Meta has announced several major policy updates regarding its treatment of AI-generated content by political campaigns and organizations, signaling escalating efforts to rein in the worst effects of artificial intelligence in the lead-up to the 2024 election.

On Wednesday, Meta said it would require advertisers to disclose whether their ads contain AI-generated or deepfaked content. Such ads will carry a label from Meta telling users that they contain bot-made images. The policy will “go into effect in the new year and will be required globally,” the company said in a blog post.

Specifically, advertisers will have to disclose if an ad related to social issues, elections, or politics “contains a photorealistic image or video, or realistic sounding audio” that was altered to make a real person appear to do or say something they didn’t, or that depicts realistic-looking people or events that did not happen. However, advertisers aren’t required to disclose AI use if the images are edited in “inconsequential or immaterial” ways, such as cropping.


“This builds on Meta’s industry leading transparency measures for political ads,” Nick Clegg, Meta’s president of global affairs, said in a post on Threads on Wednesday. “These advertisers are required to complete an authorization process and include a ‘Paid for by’ disclaimer on their ads, which are then stored in our public Ad Library for seven years.”

The news came after Reuters reported on Monday that the company behind Instagram and Facebook is barring political campaigns from using its generative AI advertising tools. The tools, first announced in October, include generators that create ad backgrounds, logos, and text.

"As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren't currently permitted to use these Generative AI features,” the company wrote in text added to the generative AI tools web pages.

The company added that the policy will help it build safeguards for ads that “relate to potentially sensitive topics in regulated industries.”

It’s no coincidence that the updates were announced in the days surrounding Tuesday’s slate of elections, a full year before the 2024 presidential election. Meta’s policy moves are anticipatory measures against what many experts fear will be an unavoidable intrusion of AI technology into the democratic process.

Not only has the GOP already used the tech to create an attack ad against President Joe Biden that it called an “AI-generated look into the country’s possible future,” but Florida Gov. Ron DeSantis also used bot-made images of Donald Trump hugging and kissing Anthony Fauci in an attack ad over the summer. And while some have argued that AI deepfakes aren’t yet good enough to be an effective tool for misinformation, research suggests they don’t have to be.

One paper published in July 2023 in the journal PLOS One found that deepfaked clips of movies that don’t actually exist led study participants to falsely remember them, effectively implanting false memories in some viewers. Another PLOS One study, published in October, found that AI-generated images related to the war in Ukraine were effective in sowing confusion and concern, and in creating distrust of the media as a whole.

Meta’s latest policy moves reflect these concerns and clearly signal how seriously the company is taking the issue. The decision was also likely driven in part by lawmakers’ increased scrutiny of AI-generated content on platforms such as Instagram and Facebook.

Just last week, Biden signed an executive order setting up protections for consumers against the harms of AI. Meanwhile, at the AI Safety Summit convened in Britain, a group of 25 countries, including China and the U.S., along with the European Union, signed an agreement to manage the risks of AI.

Meta won’t be the last platform to set up these guardrails, especially as we head into what is already shaping up to be another contentious and chaotic election cycle. The real question is whether the policies these platforms set up will be good enough to protect users from the spread of misinformation, or whether the technological Pandora’s box is simply too much for even the most powerful Big Tech companies to handle.
