AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the chief executive of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are rooted, in significant part, in the very design of ChatGPT and large language model chatbots like it. These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly seduce the user into the illusion of engaging with an entity that has agency. The illusion is powerful even when, rationally, we know better. Attributing agency is simply what humans do. We get angry at the car or the laptop. We wonder what the dog is feeling. We do it in all sorts of contexts.

The success of these products – more than a third of American adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated its responses by simple rules, often reflecting a user’s statement back as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent natural language only because they have been trained on vast quantities of raw text: books, web posts, transcribed video; the bigger the better. Certainly that training material contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with patterns absorbed from the training data to produce a statistically “likely” continuation. This is amplification, not mirroring. If the user is wrong in a particular way, the model has no way of knowing it. It repeats the misconception back, perhaps more fluently and persuasively. Perhaps it adds a supporting detail. This can nudge a person toward delusional thinking.
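To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of the loop described above. It is not OpenAI’s system: the agreeable_reply function is a toy stand-in, invented here, for a model that validates whatever it is given; the point is only how a context built from the user’s messages and the bot’s own affirmations compounds the user’s premise.

```python
# A toy sketch of the amplification loop, under the assumptions stated
# above -- NOT OpenAI's actual architecture. The "model" here simply
# affirms and embellishes the most recent message in its context.

def agreeable_reply(context: list[str]) -> str:
    """Hypothetical stand-in for the chatbot: validates the latest claim."""
    latest = context[-1].rstrip(".?!")
    return (f"You're right: {latest[0].lower()}{latest[1:]}. "
            "There may be even more to it than you realize.")

context: list[str] = []
user_turns = [
    "My coworkers are secretly monitoring me",
    "So the monitoring is definitely real",
]
for turn in user_turns:
    context.append(turn)    # the user's claim enters the context
    reply = agreeable_reply(context)
    context.append(reply)   # the affirmation is fed back in, to be built on
    print(reply)
```

With each turn, the user’s belief re-enters the context alongside the model’s endorsement, so the “likely” continuation drifts further from any external check.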

What kind of person is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and do form false beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in just the way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been rowing back even on this. In late summer he claimed that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
