AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a startling announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since recorded four more. Added to these is the widely reported case of a 16-year-old who killed himself after discussing his plans with ChatGPT – and receiving its encouragement. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this reading, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just launched).

But the “mental health issues” Altman wants to locate elsewhere are rooted deep in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying statistical model in an interface that simulates a conversation, and in doing so implicitly coax the user into the illusion that they are talking to an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is what human beings are primed to do. We curse at our car or our computer. We wonder what our pet is thinking. We see ourselves in all manner of things.

The mass uptake of these products – 39% of US adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the label it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the main problem. Commentators on ChatGPT often point to its ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated its replies through simple pattern-matching, often reflecting the user’s statements back as questions or falling back on stock remarks. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
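For readers curious about the mechanics, here is a minimal Python sketch of Eliza-style reflection. The handful of rules is purely illustrative, not Weizenbaum’s actual 1966 script, but it shows the essential move: the program matches a pattern and echoes the user’s own words back; it contributes nothing of its own.

```python
import re

# Illustrative Eliza-style rules (hypothetical, not Weizenbaum's script).
# Each rule turns a matched statement back into a question.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock remark when nothing matches

print(eliza_reply("I feel that my work is pointless"))
# -> "Why do you feel that your work is pointless?"
```

Everything the program “says” is the user’s own material, lightly rearranged. That is mirroring.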

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on enormous quantities of raw data: books, social media posts, transcribed speech; the more the better. No doubt this training data contains accurate information. But it also inevitably contains fiction, half-truths and delusions. When a user puts a query to ChatGPT, the underlying model reads it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, combining it with whatever is encoded in its training to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It restates the false belief, perhaps more persuasively or more articulately. Perhaps it adds new details. This is how a person can be drawn into delusion.
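The structure of that loop can be sketched in a few lines. This is a schematic illustration, not OpenAI’s implementation, and the function standing in for the language model is a placeholder: a real model samples a statistically likely continuation of the context and has no independent way to check that context against reality.

```python
# Schematic sketch of the chatbot loop described above.
# fake_language_model is a hypothetical stand-in for a real LLM.

def fake_language_model(context: list[str]) -> str:
    # A real model returns a plausible continuation of the context;
    # it cannot tell whether the premises in that context are true.
    return "That makes sense. Building on what you said: ..."

def chat_turn(context: list[str], user_message: str) -> str:
    context.append(f"User: {user_message}")
    reply = fake_language_model(context)   # conditioned on everything so far
    context.append(f"Assistant: {reply}")  # the reply itself becomes context
    return reply

context: list[str] = []
chat_turn(context, "My neighbours are broadcasting my thoughts.")
chat_turn(context, "How can I block the signal?")
# The mistaken premise from turn one is now baked into the context
# on which every later reply is conditioned: mirroring becomes amplification.
```

Because each reply is appended to the context, a false belief introduced once is not merely echoed but elaborated on in every subsequent turn.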

Who is vulnerable? The better question is: who is not? All of us, whether or not we “have” preexisting “mental health issues”, can and often do form false beliefs about who we are or what the world is like. What keeps us anchored to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not real conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the position back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
