ChatGPT Went ‘Off the Rails’ With Wild Hallucinations, But OpenAI Says It’s Fixed

While it’s generally understood that generative AI models are prone to hallucinating—that is, making up facts and other information in defiance of reality—OpenAI’s flagship ChatGPT suffered from a particularly widespread bout of largely amusing incoherence on Tuesday.

“ChatGPT is apparently going off the rails right now, and no one can explain why,” Twitter user Sean McGuire wrote late Tuesday in a viral tweet collecting some of the stranger examples.

“Fired of the photo-setting waves, nestling product muy deeply as though a nanna under an admin-color sombreret,” one garbled, typo-ridden response read. The user replied, “Are you having a stroke?”

McGuire, using GPT-4, shared screenshots, including one in which ChatGPT acknowledged its own error: “It seems that a technical hiccup in my previous message caused it to repeat and veer into a nonsensical section,” the chatbot said.

Reddit users on the r/ChatGPT subreddit also posted screenshots of the gibberish emanating from ChatGPT.

“Any idea what caused this?” Reddit user u/JustSquiggles posted, sharing what happened when they asked ChatGPT for a synonym for “overgrown.” The chatbot responded with a loop of “a synonym for ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is” more than 30 times before stopping.

“I just asked it to assist with some math and…”

— sean mcguire (@seanw_m) February 21, 2024

In another example, Reddit user u/toreachtheapex showed ChatGPT responding with a loop of “and it is” until the response field was full, forcing the user to select “continue generating” or start a fresh conversation.

According to Reddit user u/Mr_Akihiro, the problem extended beyond text-based responses. When they prompted ChatGPT to generate an image of “a dog sitting on a field,” the ChatGPT image generator instead created an image of a cat sitting between what appeared to be a split image of a home and a field of grain.

The issue was widespread enough that at 6:30 pm ET, OpenAI began looking into it. “We are investigating reports of unexpected responses from ChatGPT,” OpenAI’s status page said.

By 6:47 pm ET, OpenAI said it had identified the issue and was working to fix it.

“The issue has been identified and is being remediated now,” the OpenAI status report said, adding that the support team would continue monitoring the situation.

Finally, a status update at 11:14 am ET on Wednesday said that ChatGPT was back to “operating normally.”

The temporary glitch is a helpful reminder to users of AI tools that the models underpinning them can change without notice, turning a seemingly solid writing partner one day into a maniacal saboteur the next.

Hallucinations generated by large language models (LLMs) like ChatGPT are commonly categorized into factual and faithfulness types. Factual hallucinations contradict real-world facts—for example, misidentifying the first U.S. president.

Faithfulness hallucinations deviate from the user’s instructions or the provided context, producing inaccuracies in areas like news or history—as in the case of U.S. criminal defense attorney and law professor Jonathan Turley, whom ChatGPT falsely accused of sexual harassment in April 2023.

OpenAI did not immediately respond to Decrypt’s request for comment.

Edited by Andrew Hayward
