
A Chatbot Could Never Write This Article. Here’s Why.

ARTIFICIAL UNINTELLIGENCE

ChatGPT is impressive—but it's not going to be taking over all our jobs anytime soon.

Photo Illustration by Luis G. Rendon/The Daily Beast/Getty

So you’re probably freaking out about ChatGPT—which is understandable.

Since its release late last year from the artificial intelligence lab OpenAI, it has created a firestorm of discourse about how these large language models will be a kind of universal disruptor, capable of doing everything from writing essays for students to pumping out SEO articles for publications to dethroning Google as the world’s most popular search engine. It even threatens creatives, potentially replacing screenwriters, novelists, and musicians.

Some of this has already started to play out. ChatGPT has been cited as an author on at least one preprint study, and even on news articles (albeit, some tongue-in-cheek). Another recent preprint found that the bot can create study abstracts so convincing that they fooled scientists. Many fear that it will leave a bloodbath of journalism and marketing jobs in its wake, along with a whole lot of headaches for teachers and professors trying to suss out whether their students actually wrote their assignments.


But, of course, the truth is a bit more complicated than that. It’s easy to look at a powerful chatbot like ChatGPT and assume it’s going to upend everything. Hell, it very well might—but right now, people are blowing the chatbot’s capabilities completely out of proportion. That hype risks giving these advanced chatbots more credibility than they deserve, and creating a very dangerous situation in the process.


“Every time one of these new language models comes out, there's a lot of hyperbole about the potential impact they will have,” Sarah Kreps, director of the Cornell Tech Policy Institute at Cornell University, told The Daily Beast. “I think so far, we've seen that the hyperbole has not been met by reality.”

And the fervor behind ChatGPT and similar programs might simply reveal that we haven’t done much to improve our own standards for what good writing looks like. Much has already been made of students using the bot to churn out essays, and educators have begun sounding the alarm about “AIgiarism.” However, these examples might be more of a condemnation of the education system’s focus on boilerplate, five-paragraph essays than anything else. After all, if the way we teach students to write is so formulaic that a bot can learn it, maybe it’s not actually a good way to write.

“We've trained students to write like algorithms,” Irina Raicu, director of the Internet Ethics Program at Santa Clara University, told The Daily Beast, referencing a quote from public education and writing expert John Warner. “Maybe it'll force instructors to go back to rethinking how they teach writing and what their expectations are for writing.”

Raicu also believes that many of the claims made by these tech companies and the media are overhyped—particularly when it comes to using these bots to replace tools like search engines. The problems with using a chatbot like ChatGPT as a search engine—or really as anything—are the same ones we see time and again with AI: bias and misinformation.

“If people are just using [ChatGPT] to try to surface information, the thing that’s concerning is that it can generate completely credible-, accurate-sounding bullshit,” she said. Look no further than Meta’s attempt to create an AI for academic studies and papers—which resulted in racist, sexist, and outright fake studies.

All that bombastic noise creates unearned credibility. A typical user who isn’t terminally online or plugged into the AI world might believe that these chatbots will always be accurate and provide the right answers—when time and again we’ve seen that the biases these bots exhibit can result in real-world harm, as when a risk-assessment algorithm used by U.S. courts was found to be heavily biased against Black people.


While bots like ChatGPT can feasibly be refined and improved over time, those biases will always remain because of how the bots are trained: on data sets composed of language sourced from real humans, who are famously biased.

“The improvements are not linear,” Kreps explained. “Because they’re trained on language that itself has biases and errors, you're just going to replicate those same biases and errors in the output.”

The technology just isn’t there yet. However, Raicu does believe that we might be at a turning point with AI, similar to where we were when Facebook came on the scene in the mid-2000s.

Back then, social media was still dismissed as a trendy, flash-in-the-pan fad that would likely go away. Now, that same company Mark Zuckerberg started at Harvard is one of the wealthiest corporations in the world, and is literally attempting to build its own digital universe. We might be in a similar situation with AI and companies like OpenAI.

“I keep thinking of the old Facebook slogan of move fast and break things,” Raicu said. “A lot of companies moved away from that, but now I think we're back to breaking things.”

That’s not to say there isn’t a place for AIs like ChatGPT. Instead of looking at them as a complete replacement for humans or actual creative effort, both Raicu and Kreps say they can be good tools to support people: a chatbot can help you come up with an outline for a paper, spark inspiration for a story, or handle other low-effort writing and ideation.

“These tools are very useful for things like Airbnb profiles, or Amazon reviews,” Kreps said. “Things like that are pretty imperfect anyway. But I think where there's a higher degree of credibility required, these language models still leave something to be desired.”

Solving these issues is incredibly complex (to say the least), but it often boils down to better education. Right now, many of these emerging technologies are black boxes: we see only what they produce, not how they work or why. That means the companies that develop and promote these bots have an obligation to be as clear and transparent as possible about how the AIs were developed and what their limitations are.


Of course, that’s easier said than done when sensational headlines about how ChatGPT will change everything get the most attention and drive discourse on social media. Raicu explained that journalists and educators bear a big responsibility for communicating accurately about AIs like ChatGPT.

“I think journalists have a huge role to play in not overhyping the stuff and not making claims about it that aren’t true,” Raicu said. She later added that “it needs to be put out there in an easily digestible, understandable way for people who are not technologists.”

So while these large language models are impressive at first blush, they don’t hold up to scrutiny. Try it for yourself right now: ask ChatGPT to write you an essay or a story. Do it a few times. You’ll quickly find that the writing is sloppy, riddled with factual errors and the occasional bit of nonsense.

And what’s worse, it’s boring. The syntax is simple. There’s no style or flair. When Edward Tian developed GPTZero, an app that tries to tell the difference between ChatGPT-generated and human-written text, one of his parameters was simply how complex and interesting a sentence is: the simpler the word choices, the likelier it was that a bot wrote the text.
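Tian hasn’t published GPTZero’s scoring in detail, but a minimal sketch of that intuition (flagging text whose sentences are uniformly short and whose vocabulary is repetitive) might look like the following Python. The metrics and cutoffs here are illustrative assumptions, not GPTZero’s actual implementation.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' metric: spread of sentence lengths.

    Human prose tends to mix short and long sentences; uniformly
    sized sentences are one (weak) hint of machine generation.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def vocab_variety(text: str) -> float:
    """Toy lexical-variety metric: unique words / total words."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def looks_machine_written(text: str) -> bool:
    # Cutoffs are illustrative guesses, not calibrated values.
    return burstiness(text) < 4.0 and vocab_variety(text) < 0.5
```

A real detector would learn those cutoffs from labeled data (GPTZero reportedly leans on perplexity under a language model rather than raw word counts), but the core idea is the same: uniformity and simplicity read as machine-made.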

Despite all the hubbub and hype, ChatGPT can’t replace the genuine article. In fact, it may never be able to. There will always be a need for an actual flesh-and-blood human in the loop.

“It still has a ways to go before they can fully simulate a human mind, writing, craft, and fact checking,” Kreps said. “The idea that you could replace humans, I think it's still fanciful.”
