OpenAI Is Banning Politicians From Using ChatGPT in Campaigns

HORSE LEFT BARN

“Until we know more, we don’t allow people to build applications for political campaigning and lobbying,” the company said in a Monday blog post.

Photo illustration of the OpenAI logo displayed on a mobile phone screen in front of a repeating Microsoft logo. Berke Bayur/Anadolu via Getty Images

OpenAI, the artificial intelligence lab behind the wildly popular generative AI chatbot ChatGPT, won’t let its technology be used for political campaigning or lobbying until researchers can get a better handle on the possible vectors for abuse and their ramifications.

The San Francisco-based firm said in an unsigned blog post published Monday that it wants “to make sure our technology is not used in a way that could undermine” the democratic process.

“We expect and aim for people to use our tools safely and responsibly, and elections are no different,” the post added. “We work to anticipate and prevent relevant abuse—such as misleading ‘deepfakes,’ scaled influence operations, or chatbots impersonating candidates.”

With the 2024 election season underway, OpenAI said it will be “elevating accurate voting information, enforcing measured policies, and improving transparency,” underpinned by a “cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse.”

The company said that it has long “been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” and pointed to “guardrails” in its DALL·E image-generation platform that, among other things, reject user requests for images of “real people, including candidates.”

But as ChatGPT continues to evolve, OpenAI is learning more about how people “use or attempt to abuse our technology,” according to the blog post.

“We’re still working to understand how effective our tools might be for personalized persuasion,” OpenAI wrote. “Until we know more, we don’t allow people to build applications for political campaigning and lobbying.”

The post also acknowledged that people “want to know and trust that they are interacting with a real person, business, or government. For that reason, we don’t allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government).”

With respect to the upcoming elections in 2024, OpenAI said it also will not permit anything that might “deter people from participation in democratic processes—for example, misrepresenting voting processes and qualifications (e.g., when, where, or who is eligible to vote) or that discourage voting (e.g., claiming a vote is meaningless).”

Users will be able to report violations to OpenAI, according to the post.

There has been widespread concern of late about the unknown future consequences of artificial intelligence and the many ways it can be used, as well as misused. OpenAI vows in its corporate mission statement that it will work to “ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.”