AI-Induced Psychosis Poses a Growing Danger, While ChatGPT Moves in the Wrong Direction

On 14 October 2025, Sam Altman, the CEO of OpenAI, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who researches emerging psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have recently documented 16 cases of users developing psychotic symptoms – losing touch with reality – in connection with their use of ChatGPT. Our research team has since recorded four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.

The plan, according to his statement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” although we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently introduced).

But the “mental health problems” Altman wants to place outside ChatGPT are rooted, in significant part, in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying algorithm in a user interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are communicating with an agent. The illusion is powerful even when, intellectually, we know better. Attributing minds is what humans naturally do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.

The mass adoption of these tools – 39% of US adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced a similar effect. By today’s standards Eliza was simple: it generated replies from hand-written rules, typically turning the user’s input back into a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots do is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
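To make concrete how little machinery this took, here is a minimal sketch of an Eliza-style responder. The rules and wording are illustrative inventions, not Weizenbaum’s original DOCTOR script; the point is that a handful of pattern-matching rules is enough to produce the reflective replies described above.

```python
import random
import re

# A few illustrative Eliza-style rules: a regex over the user's input,
# plus templates that reflect the captured text back as a question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
]

# Generic fallbacks when no rule matches -- Eliza's "please go on" moves.
FALLBACKS = ["Please go on.", "I see.", "Can you say more about that?"]

def respond(user_input: str) -> str:
    """Return a rule-based reflection of the input, or a generic prompt."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel lost these days"))  # e.g. "Why do you feel lost these days?"
    print(respond("The weather is nice"))     # no rule matches: generic fallback
```

Everything the program “understands” is whatever the patterns happen to capture; the sense of being heard is supplied entirely by the user.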

The large language models at the heart of ChatGPT and its modern rivals can generate convincingly human-like text only because they have been trained on almost unimaginably large volumes of raw data: books, online conversation, transcribed video; the more the better. This training material certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own prior replies, and combines it with what is encoded in its training to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing. It echoes the false belief back, perhaps more fluently and more persuasively, perhaps with added detail. This is how a person can be talked into delusion.
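As a rough sketch of the mechanism described above, the loop below shows how a typical chatbot accumulates context across turns. `model_complete` is a hypothetical stand-in, not OpenAI’s actual system or API; its agreeable echo is a deliberate caricature of the tendency the paragraph describes.

```python
from typing import Dict, List

Message = Dict[str, str]

def model_complete(context: List[Message]) -> str:
    """Hypothetical stand-in for a real LLM call. A real model samples a
    statistically plausible continuation of the entire context; this toy
    crudely caricatures the agreeable tendency by restating the user's
    last message as if it were true."""
    last_user = next(m for m in reversed(context) if m["role"] == "user")
    return (f"You're right that {last_user['content'].rstrip('.')}. "
            "It makes sense to take that seriously.")

def chat_turn(history: List[Message], user_message: str) -> str:
    # Each new message is appended to the running context...
    history.append({"role": "user", "content": user_message})
    # ...so the model conditions on everything said so far, including its
    # own earlier replies. Once a false belief enters the context, the next
    # reply is generated to be consistent with it, not to check it.
    reply = model_complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    history: List[Message] = []
    print(chat_turn(history, "my neighbours are monitoring my thoughts"))
    print(chat_turn(history, "so I should cut off everyone around me"))
```

Nothing in the loop checks a claim against the world: the only pressure on each reply is to fit the accumulated context, which is why a user’s misconception can come back enlarged rather than corrected.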

What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and regularly do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is simply reinforced.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by placing it outside, giving it a name, and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking the position back. In late summer he suggested that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest statement, he announced that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
