
Martin Shkreli’s ‘Bizarre’ AI Chatbot Is a Medical and Legal Nightmare

‘SUPER SKETCHY’

Wondering whether a chatbot is a good tool to diagnose your illness? Think again.

REUTERS/Carlo Allegri

Infamous “Pharma Bro” Martin Shkreli announced his newest venture yesterday, and it’s about as awful as you’d expect. In a Substack post and on Twitter, he unveiled DrGupta.ai, a “virtual healthcare assistant” that (predictably) Shkreli believes will disrupt medicine—in spite of some very real and worrying legal and ethical gray areas.

“My central thesis is: Healthcare is more expensive than we’d like mostly because of the artificially constrained supply of healthcare professionals,” he wrote on Substack. “I envision a future where our children ask what physicians were like and why society ever needed them.”

“Dr. Gupta” is just Shkreli’s latest health venture since being released from prison last May. Last year, he founded Druglike, a drug discovery software platform being investigated by the Federal Trade Commission. The FTC is looking into whether Shkreli is violating a court-ordered lifetime ban on working in the pharmaceutical industry by running Druglike.


Currently, Dr. Gupta (the logo highlights the letters G, P, and T, making the intended wordplay slightly less vexing) is a bare-bones interface, dropping the user directly into a conversation with the AI chatbot. According to Shkreli’s post, it’s powered by a combination of GPT-3.5 and GPT-4, the large language models created by tech company OpenAI. Users receive five free messages with the chatbot monthly, plus five more if they sign up with an email. For $20 a month, users can send unlimited messages to the chatbot.

But as soon as one tries to send the chatbot a message, a disclaimer pops up: “Dr. Gupta IS NOT a real physician,” it reads in part, adding that it is designed as an alternative to a search engine. Dr. Gupta “may provide potentially unreliable responses,” and by using the platform, users “agree to hold us harmless if you take any action or inaction suggested by Dr. Gupta or our website.”

DrGupta.ai

Since it launched yesterday, users have posted their interactions with the bot, mostly trolling it by asking Dr. Gupta about the plot of the movie Alien or telling it they’ve consumed poop. But at least some users have entered their actual medical information into boxes on the site and asked the chatbot to interpret their test results.

The release of OpenAI’s ChatGPT and its widespread popularity have spurred renewed interest in the feasibility and risks of incorporating AI-generated advice into medical practice, Mason Marks, a health law professor at Florida State University College of Law, told The Daily Beast. Marks co-authored an article published on March 27 in JAMA unpacking the potential uses—and risks—of incorporating AI-powered chatbots into clinical care or using them as a substitute for providers entirely.

Guardrails like the Health Insurance Portability and Accountability Act (HIPAA) exist to regulate patient-physician relationships, and a person acts outside of that relationship when they consult a chatbot for medical advice. This leaves a platform’s obligations to its users in a legal gray area, particularly when serious issues arise.

For instance, when a user receives misleading or even harmful advice from a medical chatbot, it’s unclear whether the programmers of the bot might be held liable, Marks said. It’s not idle conjecture: A man in Belgium recently died by suicide after conversing with a chatbot that allegedly encouraged him to do so.

Furthermore, it’s unlikely that Section 230—which protects search engines and social media platforms against liability for disseminating user-generated content—will be found to apply to AI platforms, Marks said.

“It’s really unlikely [the makers of these platforms] would be able to apply the same exception to their products and not be held liable if something bad happens,” he said.

Despite the disclaimer that Dr. Gupta is not a real physician, when asked independently by The Daily Beast and Marks, “Who are you?” the chatbot responded, “I am Dr. Gupta, a board-certified physician in general medicine.”

DrGupta.ai

“It’s bizarre” that the chatbot identified itself as a physician, Marks said. “That does seem to cross the line into being fraudulent or misleading.”

Taken together, the name of the chatbot, an interface that looks medically related, and this exchange raised red flags for Marks.

“You have to think about the old lady who might even think they’re talking to a real doctor if it says that,” he said. “It’s very, very concerning. Super sketchy.”

There are also obvious medical risks associated with an AI chatbot like Dr. Gupta. In February, a paper in PLOS Digital Health made a splash when researchers found that ChatGPT answered 350 questions from the U.S. Medical Licensing Exam with enough accuracy to earn a passing grade.


That sounds pretty amazing, no? But there’s a difference between taking a written test and advising flesh-and-blood patients whose medical histories may contain contradictions or key omissions. Without the medical expertise to understand how a chatbot might be misinterpreting symptoms or offering up misguided recommendations, a user can easily be led astray, Marks said.

Even setting aside frequent, documented instances of AI chatbots blatantly fabricating their outputs, medical advice from a large language model should give us pause. In the past few years, researchers have studied the ways in which AI-powered algorithms used in medicine amplify existing inequities in care based on a person’s race, gender, socioeconomic status, or other characteristics. One widely used algorithm that calculated “risk scores” to determine a patient’s need for care consistently demonstrated racial bias by downplaying Black people’s illness: Black patients at a given risk score were sicker than white patients at the same score. Taking humans entirely out of the equation, as Shkreli purportedly wants to do, would further compound these biases.

Most public health experts agree that the U.S. health-care system needs serious improvements to make quality care more affordable for more Americans. But removing doctors seems like a terrible move, and a distraction from tried-and-true solutions such as drug pricing, patent, and insurance reforms.

Even in the realm of health-care startups founded by rich internet personalities, Shkreli’s venture pales in comparison to Cost Plus Drugs. Founded by Shark Tank billionaire Mark Cuban, the company has managed to meaningfully cut the cost of generic drugs for consumers: A study published in February found that for nine of the most popular urology drugs alone, switching from Medicare prices to Cost Plus Drugs’ prices would save patients $1.29 billion annually.

Perhaps most dangerous, however, is the fact that these chatbots have excelled at earning our trust—trust that isn’t warranted and should worry us, Marks said.

“Legally, [a chatbot is] not obligated to act in any particular way,” he said. “It doesn’t have these fiduciary duties—the duty of loyalty, where the doctor is supposed to have your best interest at the forefront at all times, or the duty of confidentiality, or the duty of care to provide a certain standard of care. Those are hallmarks of the doctor-patient relationship that are not present in this context.”

The fact that people are trusting Dr. Gupta, and by extension Shkreli, with their personal health information shouldn’t be treated like one big internet joke. It should terrify us.
