ChatGPT users were baffled after the chatbot began churning out completely nonsensical responses on Tuesday night—or at least, more nonsense than usual. The issue was so bad that it forced OpenAI to begin investigating the “unexpected responses from ChatGPT.”
“Has ChatGPT gone temporarily insane?” one user asked on the ChatGPT subreddit. “I was talking to it about Groq, and it started doing Shakespearean style rants.”
“It’s lost its mind,” another user wrote. “I asked it for a concise, one sentence summary of a paragraph and it gave me a [Victorian]-era epic to rival Beowulf, with nigh incomprehensible purple prose. It’s like someone just threw a thesaurus at it and said, ‘Use every word in this book.’”
The responses ranged from non sequiturs to wrong answers to the same phrase repeated over and over. While the replies varied, the issue seemed to affect the majority of users over the course of the night. According to the company’s status page, OpenAI finally resolved the glitch on Wednesday morning.
OpenAI did not immediately respond when reached for comment.
The episode sparked confusion, jokes, and even fear among users. Some speculated that the LLM had collapsed entirely, while others wondered whether the chatbot had become sentient. Users also joked that the issue occurred just a day after it was announced that Reddit would be selling user data to AI companies, with one writing, “Maybe they already parsed the data they bought from Reddit and this is the inevitable result?”
“I just asked it to implement a bug fix in JavaScript,” one user wrote on a post that included a screenshot of a deranged answer from ChatGPT. “Reading this at 2 a.m. is scary.”
OpenAI has, predictably, remained silent about exactly what the issue was. Still, the glitch is a stark example of how quickly and easily emerging technology like AI can break down. When that happens, it’s not just a matter of disrupted work for those who rely on it for coding or writing; it can also cause real-world damage by producing biased and harmful responses.
As users increasingly rely on LLMs like ChatGPT for work and life, the potential for harm grows more widespread. The episode illustrates the importance of taking everything produced with generative AI with a grain of salt. These models quite often get things wrong—and, sometimes, they go completely off the rails.