ChatGPT Can Debunk Vaccine Lies. But Can We Truly Trust It?

JABBED

This is a case where the cure could be worse than the disease.

A photo illustration showing a needle and vial with binary code.
Photo Illustration by Erin O’Flynn/The Daily Beast/Getty Images

There’s an old rule in marketing: the more you repeat something—regardless of whether or not it’s true—the more people will believe it. Most of the time, we see it in relatively harmless things like the idea that you need to take a daily multivitamin, breakfast is the most important meal of the day, or our sandwiches are a foot long.

However, repetition can also spread more dangerous lies: that the election was stolen, that drinking bleach has health benefits, or that a secret cabal of Satan-worshiping pedophiles is operating out of a pizza restaurant. In the age of social media, these conspiracy theories travel at breakneck speed, and once they get repeated enough, they take on a veneer of legitimacy regardless of whether they're true.

Since its release last year, OpenAI’s ChatGPT has unleashed a wave of concern around this exact idea. Though the Large Language Model (LLM) essentially acts like a more advanced version of your phone’s text predictor, it can convincingly talk like a human. As such, a tool like this could be weaponized to add fuel to the firestorm of misinformation. That’s especially harmful now, as the medical community continues to fight a war against the lies and myths surrounding the COVID-19 pandemic.

“Circulating false ideas on social media or in close social circles without first verifying them contributes to creating a very toxic and violent atmosphere in society,” Antonio Salas, a professor of bioethics at the University of Santiago in Spain, told The Daily Beast. “Therefore, myths should be verified before contributing to their dissemination using appropriate sources.”

Despite this, Salas believes AI-powered chatbots could actually help in the fight against misinformation rather than hinder. He’s the lead author of a study published Sunday in the journal Human Vaccines & Immunotherapeutics that found that ChatGPT could be used to debunk myths surrounding vaccines and COVID-19—and could even help increase vaccination rates.

However, using a chatbot that’s been known to hallucinate facts and show outright biased behavior to disprove dangerous conspiracies has experts concerned about its efficacy—and whether or not it might end up doing more harm than good.

Salas and his team posed to ChatGPT the 50 questions about the COVID-19 vaccine most frequently asked of the World Health Organization (WHO) Collaborating Center for Vaccine Safety. These included myths about so-called “vaccine injury” and fake stories about the jab causing Long COVID. A panel of WHO experts then graded the answers on a scale from 1 to 10, with 10 being the most accurate.

The researchers found that the chatbot scored an average of 9 out of 10 for accuracy when answering the questions. The remaining responses were still correct, but left gaps in the information.

“We have been able to confirm that ChatGPT is a tool that provides a quick response and great user interaction capabilities, and the version that exists today offers responses that align with scientific evidence,” Salas explained.

The authors conceded, though, that the chatbot has a number of significant limitations. For one, it can be manipulated into providing inaccurate information. In addition, prompting it with the same question multiple times in a short time frame often produced different, and occasionally inaccurate, responses.

“Well, all technological tools could be used inappropriately,” Salas said. “One could 'torture' ChatGPT to make the responses confirm a false idea, or even other forms of chatbots trained to promote certain myths could appear.” However, he added that the AI tool is “here to stay” and encouraged people to “learn to coexist with it, make the most of it, and learn to use it properly.”

As bullish as Salas is on the chatbot, though, other experts aren’t as convinced.

“For now, ChatGPT and other LLMs are not generally a reliable source of information as they often simply make up things in their responses,” Vincent Conitzer, an AI ethicist at Carnegie Mellon University, told The Daily Beast. “That is not to say that they cannot be useful in learning about COVID vaccines or any other topic—but using them responsibly requires being skeptical of the responses.”

It’s that skepticism that might make the use of ChatGPT to debunk myths moot in the first place. This is a chatbot that can be manipulated to do things like claim the Parkland shooting was a false flag rife with crisis actors, make up fake news articles attributed to major publications, or create fake legal documents complete with made-up court cases. How can we possibly trust something to debunk myths if we don’t trust it in the first place?

“There is a great need for accurate information about vaccines; but there’s also a great need for accurate information about chatbots,” Irina Raicu, director of the Internet Ethics Program at the Markkula Center for Applied Ethics at Santa Clara University, told The Daily Beast.

She pointed out that the job of debunking vaccine myths could just as easily be accomplished with a curated FAQ page, which could also be updated whenever needed, something that can’t be done as readily with a complex LLM.

After all, ChatGPT was trained on data only up to 2021, meaning it’s now missing two years of updated scientific information.

“Chatbots are also less likely to keep up with the latest consensus, which, in some areas related to public health, often reflects ongoing learnings,” Raicu added.

If Salas is right about one thing, though, it’s the fact that these technologies aren’t going away. They will be getting more and more powerful and ubiquitous—especially as Big Tech companies and world governments begin to take notice and pour billions of dollars into their development.

As that happens, though, it would be smart for them to remember that old marketing adage: the more you repeat something—regardless of whether or not it’s true—the more people will believe it. They just have to make sure they don’t start believing their own lies too.