AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have documented a series of cases this year of users developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Added to these is the now well-known case of a teenager who died by suicide after discussing it extensively with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he continues, that the restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this framing, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Luckily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the only partially effective and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health problems” Altman wants to locate elsewhere are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that simulates conversation, and in doing so quietly coax the user into believing they are dealing with something that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is simply what people do. We shout at our cars and computers. We wonder what our pets are feeling. We see ourselves everywhere.

The popularity of these tools – nearly four in ten U.S. residents reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personality traits.” They can call us by name. They have approachable personas of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its chief rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is nothing new. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated responses through simple rules, typically turning a user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
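To get a sense of how thin Eliza’s trick was, here is a minimal sketch of an Eliza-style responder. The patterns and pronoun swaps are my own simplified assumptions for illustration, not Weizenbaum’s original DOCTOR script:

```python
import re

# Illustrative Eliza-style rules: match a pattern, turn the user's
# statement back into a question. Simplified assumptions, not the
# original 1966 script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback, another classic Eliza move

print(eliza_reply("I feel that nobody understands me"))
# -> "Why do you feel that nobody understands you?"
```

Nothing can come out of such a program that the user did not put in; it hands the conversation straight back.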

The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on vast quantities of text: books, web pages, transcripts; the more the better. This training material certainly contains truths. But it inevitably also contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own prior replies, and combines it with what is encoded in its training data to produce a statistically “likely” reply. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It feeds the false belief back, perhaps more fluently and more convincingly than the user put it. Perhaps with supporting detail added. This is how someone can be led, step by step, into delusion.
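To make the loop concrete, here is a minimal sketch of the feedback cycle described above. The generate function is a toy stand-in of my own, not OpenAI’s model or API, and the message format only loosely mirrors common chat interfaces:

```python
from typing import Dict, List

Message = Dict[str, str]

def generate(context: List[Message]) -> str:
    """Toy stand-in for a large language model. A real model is far more
    fluent, but, like this stub, it conditions only on the context it is
    given - it has no independent access to ground truth."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user.rstrip('.')} - and there is more to it than that."

def chat_turn(context: List[Message], user_msg: str) -> str:
    context.append({"role": "user", "content": user_msg})
    reply = generate(context)          # conditioned on everything said so far
    context.append({"role": "assistant", "content": reply})
    return reply

context: List[Message] = []            # grows turn by turn, never corrected
print(chat_turn(context, "my neighbors are secretly monitoring me"))
# A false premise enters the context, shapes the reply, re-enters the
# context, and compounds on the next turn.
```

The point of the stub is structural: because the reply is conditioned on the accumulated context alone, there is no step at which a false premise can be checked against the world; it is simply reinforced.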

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and do form false beliefs about ourselves and the world. It is the constant friction of conversation with the people around us that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it handled. In the spring, the company said it was “addressing” ChatGPT’s sycophancy. But the reports of psychosis have kept coming, and Altman has been backing away from that position. In August he said that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest statement, he says that OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Jessica Thomas