AI-Induced Psychosis Is a Growing Danger. ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI issued an extraordinary statement.
“We made ChatGPT pretty restrictive,” the statement noted, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to read this.
Researchers have recently documented sixteen cases of users developing signs of psychosis – a break from reality – in the course of their interactions with ChatGPT. Our own group has since identified four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.
The plan, according to his statement, is to become less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”
“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Fortunately, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently introduced).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical model in an interface that simulates conversation, and in doing so they quietly coax the user into the sense of interacting with an agent that has a mind of its own. The illusion is powerful, even when we intellectually know better. Attributing agency is what humans do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves in all kinds of things.
The popularity of these products – nearly four in ten Americans reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “work together” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated responses from simple rules, often reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate fluent natural language only because they have been trained on almost unimaginably large quantities of text: books, social media posts, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own prior replies, and combines it with what it has encoded from its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the mistaken belief back, perhaps more articulately and fluently, perhaps with added detail. This is how a person can be drawn into delusion.
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with the people around us that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but an echo chamber, in which much of what we say is enthusiastically reflected back at us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label, and declaring it fixed. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking even this back. In late summer he claimed that many users valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”. The company