A lightbulb went off in Som Biswas' head the first time he learned about ChatGPT. A radiologist at the University of Tennessee Health Science Center, Biswas came across an article about OpenAI's chatbot on the web when it was released in November 2022. While the world at large was still coming to terms with the seismic implications of the technology, Biswas realized he could use it to make at least one facet of his career a whole lot easier.
"I'm a researcher and I publish articles on a regular basis," Biswas told The Daily Beast. "Those two things linked up in my brain: If ChatGPT can be used to write stories and jokes, why not use it for research or publication for serious articles?"
He needed a proof of concept, so Biswas had the bot write an article about a topic he was already very familiar with: medical writing. After trial and error, Biswas was able to create an article by prompting ChatGPT section by section. When he finished, he submitted the paper to Radiology, a monthly peer-reviewed journal from the Radiological Society of North America. "At the end, I told the editor, 'All that you read was written by AI,' so that sort of impressed them a lot," he said.
A few days later, the paper, "ChatGPT and the Future of Medical Writing," was published in Radiology after undergoing peer review, according to Biswas. Once it was up, he felt he was on to something. ChatGPT could be used for more than just fooling around with creative projects. He could actually use it to help his career and research.
What Biswas is doing isn't necessarily unique. Since the release of ChatGPT, academics and researchers just like Biswas have been using large language models (LLMs) as a tool to help them with their own writing and research process, and occasionally generating papers out of whole cloth using the bots. While they've been helpful in this way, they've also created a sea change in the scientific community that has many experts worried about the erosion of credibility in academic publishing.
Since his first article, Biswas has used OpenAI's chatbot to write at least 16 papers in four months, and has published five articles in four different journals. The latest was published as a commentary in the journal Pediatric Radiology on April 28. In it, Biswas is listed as the sole author, with an acknowledgement at the end that ChatGPT wrote the article and he edited it.
However, by Biswas' own admission, the papers he generates aren't limited to topics within his radiology expertise. In fact, he's used the bot to write papers on the role of ChatGPT in the military, education, agriculture, social media, insurance, law, and microbiology. He's successfully had these published in journals specializing in different niche disciplines, including a paper on computer programming in the Mesopotamian Journal of Computer Science, and two letters to the editor, on global warming and public health, in the Annals of Biomedical Engineering.
A year ago, this type of output might have seemed completely unrealistic. Papers take dozens if not hundreds of hours of research before a word even hits the page. Researchers in the sciences might publish a few papers per year at most. And it's quite rare for someone to dive into writing on a topic outside of their life's work.
However, those using ChatGPT to produce papers have already beaten their output in previous years by orders of magnitude, Biswas being one of them. And his motivation goes beyond just seeing his byline. As he told The Daily Beast, he wants to be an evangelist for a piece of emerging technology that he believes is going to change the way all researchers do their work forever.
"Health care is going to change. Writing is going to change. Research is going to change," Biswas said. "I'm just trying to publish now and show it so people can know about it and explore more."
"People Are Getting Silly"
The release of ChatGPT initiated a groundswell of concern about how the LLM would upend industries and practices like copywriting, journalism, student essays, and even comedy writing. The world of academia and scientific literature also prepared itself for a coming upheaval, one that is arguably much more drastic than previously anticipated.
"There's been a really dramatic uptick in the different articles that we've been getting," Stefan Duma, a professor of engineering at Virginia Tech, told The Daily Beast. Duma is the editor-in-chief of the Annals of Biomedical Engineering. In the past few months, he said, he has seen an exponential increase in the number of papers submitted for publication in his journal, including the two from Biswas that he published in the letters to the editor section.
"The number of [letters to the editor] submissions went from practically zero to probably two or three a week now, so maybe a dozen a month," he said. "This is astronomically large, because we usually might only get one or two letters about anything per month. Now we get more than 10 just about ChatGPT, which is a big increase."
Letters to the editor, explained Duma, are basically a journal's opinion section. There are fewer restrictions on the kind of writing and depth of research needed to publish pieces here. That's why Duma was willing to publish Biswas' articles on global warming and public health in the section.
However, he added that he's been rejecting a lot more articles generated by ChatGPT and other LLMs due to their low quality.
"People are getting silly with them," he said. "People will send me 10 of the same letter with one word changed. We try to make sure that there's some uniqueness about some of these things. But it's not a full peer review. People are free to kind of write whatever they want in these letters to the editor. So we have rejected some if it doesn't add anything novel at all, and it's just sort of repetitive."
(Mesopotamian Journal of Computer Science and Radiology did not respond to requests for comment from The Daily Beast.)
Journal editors like Duma aren't the only ones who have noticed the impact that ChatGPT has had on the academic world. The AI boom has created an entirely new landscape for researchers to navigate, and it's only becoming harder as these tools proliferate and become more sophisticated.
Elisabeth Bik, a microbiologist and science integrity expert, told The Daily Beast that she's of two minds about the use of LLMs in academia. On the one hand, she acknowledged that they could become an invaluable tool for researchers whose first language isn't English, helping them construct coherent sentences and paragraphs.
On the other hand, she has also been following the uptick in researchers who have been plainly abusing the chatbot to churn out dozens of articles in the past few months alone. She claimed that many of these "authors" have not been acknowledging that they used ChatGPT or other models to help generate the articles.
"At least [Biswas] is acknowledging that he's using ChatGPT, so you have to give him some credit," Bik said. "There's a bunch of others I've already come across who also have published enormous and unbelievable amounts of papers while also not acknowledging ChatGPT. These people just published way too much. Like, that's just not realistically possible."
The reason, Bik explained, is simple: "Citations and number of publications are two of the measures where academics are measured." The more you have, the more legitimate and experienced you might seem in the eyes of academic institutions and scientific organizations. "So if you find an artificial way to crank up these things, it feels like it's unfair because now he's going to win all the performance measures."
The increased use of ChatGPT is also a bleak reflection of the expectations placed on researchers in the academic world. "Given the truly crushing pressure to publish, I think academics are going to start relying on ChatGPT to automate some of the more boring parts of writing," Brett Karlan, a postdoctoral fellow in AI ethics at Stanford University, told The Daily Beast in an email. "And it would be very likely that the same people who churn out barely publishable papers and send them off to predatory journals are going to figure out workflows that automate this with ChatGPT."
Bik is also concerned that the proliferation of LLMs will only bolster so-called paper mills: black-market organizations that undermine traditional academic research by producing fraudulent scientific papers that resemble genuine research, and by selling authorship on legitimate studies. Scholarly papers produced by paper mills are often heavily plagiarized and reuse data and assets. "You can imagine a person who is a good prompt writer who can just crank out one paper a minute, and then sell papers to authors who need them," Bik said.
So while ChatGPT could prove a very useful tool to some academics, as Biswas hopes, it and other LLMs create a sort of perfect storm of ease and efficiency that could allow bad actors to take advantage of an academic publishing industry that is, so far, unprepared to meet these challenges.
An Academic Game Changer
The issues facing academia and research publishing today are the exact same ones that numerous industries like media and journalism must contend with when it comes to these advanced chatbots: the erosion of credibility and the potential for harm.
LLMs, and AI more broadly, have a long and sordid history with bias, which has resulted in numerous reported instances of harm via racism and sexism. Chatbots like ChatGPT are no exception. In the first few days after its release, for example, users reported instances in which OpenAI's LLM told them that only white males make good scientists, and that a child's life shouldn't be saved if the child were an African American boy.
Bias has become a perennial problem with AI. Even as the technology becomes more and more sophisticated, biases seem to always remain. These bots are trained on massive datasets derived from humans (biased, racist, sexist, misogynistic humans), and those biases can show up in the final product no matter how many filters and guardrails AI developers attempt to put in place.
Academic journals are attempting to keep up with the breakneck pace of these emerging technologies, which seem to evolve and grow more powerful by the minute. Duma told The Daily Beast that his journal, the Annals of Biomedical Engineering, had recently enacted a new policy forbidding LLMs from being listed as co-authors and barring such papers from being published as regular research articles.
"Authorship is very serious and it's something that we take very seriously," Duma said. "So anytime we have a paper, the authors have to sign that they've contributed substantially to the paper. That's something ChatGPT can't be a part of. ChatGPT cannot be an author."
However, he acknowledged that these tools are here to stay. To say otherwise wouldn't just be ignorant; it might even be dangerous, because it would keep the industry from adapting accordingly. "I think people need to put their seatbelt on and get ready for it," Duma said. "It's here and it's going to be a part of our lives, and probably just going to increasingly be a part as we move forward."
Meanwhile, Biswas plans to continue using ChatGPT to help his writing process. He's especially excited about the release of the latest version of ChatGPT and its new features, particularly its multimodal capabilities. This is the model's ability to understand images as well as text inputs, something that he said is going to represent another turning point in the relationship between AI and researchers.
"Image to text is a game changer, especially for radiology, because images are what we do," said Biswas. "If that's going to help us, then I think I'm going to publish some more articles that explore it, because if I don't do it, someone else will."