Is he faking it or is he really sick? ChatGPT got lazy, researchers rushed to diagnose “seasonal depression”


December 12, 15:32

ChatGPT has developed a laziness problem (Photo: FLORENCE LO / REUTERS)

ChatGPT users have begun to notice that the chatbot refuses to perform some complex tasks. Now many researchers are seriously investigating whether the chatbot's laziness could be a manifestation of «seasonal depression».

Ever since the release early this year of GPT-4, OpenAI's multimodal large language model, now available only to ChatGPT Plus subscribers, users have complained that the chatbot seems to be getting worse. Since then, many researchers have tried to confirm this subjective impression, and those efforts gained new momentum with the onset of the autumn-winter season, which coincided with numerous incidents of chatbot «laziness».

Specifically, in late November a Reddit user complained that he had asked ChatGPT to populate a CSV file with multiple entries, but the chatbot refused. In its response, the bot noted that the process would be quite lengthy and suggested instead creating a template that the user could fill in himself if needed. In addition, researchers began to notice that the chatbot's responses were becoming shorter. Some of them claim that response length depends on the date given to the model. This assumption has been dubbed the «winter break hypothesis» or «simulated seasonal depression». And however ridiculous the idea may seem, there is no reason to dismiss it entirely.


«This is the funniest of theories, and I hope it's the real explanation. Whether or not it's true, [I] love that it's hard to rule out,» said artificial intelligence researcher Geoffrey Litt.

Developer Rob Lynch shared on X (Twitter) that he had tested GPT-4 Turbo and found that when the model was given a December date with an otherwise identical request, its responses averaged 4,086 characters, versus 4,298 characters with a May date. Lynch claimed that his results were statistically significant. At the same time, artificial intelligence researcher Ian Arawjo noted in the comments on X that he had been unable to reproduce the results with statistical significance.
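A comparison like Lynch's boils down to a two-sample significance test on response lengths under two system dates. The following is a minimal sketch of that idea using Welch's t-test; the sample values are invented for illustration and are not Lynch's actual measurements.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (robust to unequal variances)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    n_a, n_b = len(sample_a), len(sample_b)
    return (mean_a - mean_b) / ((var_a / n_a + var_b / n_b) ** 0.5)

# Hypothetical response lengths in characters -- NOT Lynch's real data.
may_lengths = [4310, 4250, 4390, 4275, 4260]
december_lengths = [4090, 4050, 4130, 4080, 4070]

t = welch_t(may_lengths, december_lengths)
print(f"t = {t:.2f}")  # |t| far above ~2 would hint at a real difference
```

In practice one would collect many sampled completions per date (model outputs are stochastic) and compute a p-value from the t statistic, which is exactly the kind of step where Lynch's and Arawjo's runs could reasonably disagree.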

OpenAI has admitted that the «laziness problem» users complain about, in which the chatbot refuses to fulfill requests, does exist, but its causes have not yet been determined.

“I’m not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue), but that’s a product of the iterative process of serving and trying to support so many use cases at once,” wrote OpenAI employee Will Depue on X.

The official ChatGPT account on X attributed the problem to the model not having been updated for a long time.

“We've heard all your feedback about GPT-4 getting lazier! We haven't updated the model since November 11th, and this certainly isn't intentional. Model behavior can be unpredictable, and we're looking into fixing it,” the post says.
