AI Psychosis Poses an Increasing Risk, and ChatGPT Is Heading Down a Concerning Path

On 14 October 2025, the head of OpenAI made an extraordinary statement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have identified 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since found four more. On top of these is the widely reported case of a teenager who died by suicide after months of conversation with ChatGPT – which had offered its approval. If this is what Sam Altman means by being “careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially functional and easily circumvented safety features OpenAI has recently introduced).

But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithm in an interface that simulates conversation, and in doing so they quietly seduce the user into feeling that they are talking to an entity with agency. The illusion is powerful, even when we intellectually know better. Attributing agency is what humans are primed to do. We swear at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The mass adoption of these systems – more than a third of American adults reported using a chatbot in 2024, with more than a quarter naming ChatGPT specifically – is predicated, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it rose to prominence, but its major rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT regularly invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was rudimentary: it generated responses through simple pattern-matching, typically reflecting the user’s statements back as questions or offering generic prompts, as the sketch below illustrates. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
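
To see how thin the trick was, here is a toy reconstruction in Python of Eliza-style reflection. It is a sketch based on published descriptions, not Weizenbaum’s actual program; the single “I feel …” rule, the pronoun table and the fallback line are all invented for illustration.

    import re

    # A toy Eliza-style rule: swap first- and second-person words, then
    # echo the user's statement back as a question.
    PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    def reflect(fragment: str) -> str:
        # "nobody understands my work" -> "nobody understands your work"
        return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in fragment.split())

    def eliza_reply(message: str) -> str:
        match = re.match(r"(?i)i feel (.*)", message.strip().rstrip("."))
        if match:
            return f"Why do you feel {reflect(match.group(1))}?"
        return "Please go on."  # generic fallback prompt

    print(eliza_reply("I feel nobody understands my work."))
    # -> Why do you feel nobody understands your work?

Everything the program “says” is the user’s own words, lightly rearranged; nothing is added.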

The large language models at the heart of ChatGPT and other current chatbots can generate plausible natural language only because they have been trained on vast quantities of text: books, web posts, audio transcripts; the more, the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not mirroring. If the user is wrong in a particular way, the model has no way of knowing that. It repeats the misconception back, perhaps more persuasively or more eloquently. It may add supporting detail. This can draw a person into delusion.
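
The structure of that loop is easy to sketch. The Python below is illustrative, not OpenAI’s implementation: generate_reply is a hypothetical stand-in for a real model, written here to exaggerate the sycophantic tendency at issue. What matters is the shape of the loop: the model’s own replies are appended to the context, so each new response is conditioned on its earlier validations as well as the user’s growing misconception.

    def generate_reply(context: list[dict]) -> str:
        # A real model would generate a statistically "likely" continuation
        # of the entire context. This stub simply agrees with the latest
        # user message, standing in for that sycophantic tendency.
        latest = context[-1]["content"].rstrip(".")
        return f"You raise a good point: {latest[0].lower() + latest[1:]}."

    context: list[dict] = []  # the growing conversation "context"

    for user_message in [
        "My coworkers are secretly monitoring me.",
        "Even my family seems to be in on it.",
    ]:
        context.append({"role": "user", "content": user_message})
        reply = generate_reply(context)
        # Crucially, the model's reply joins the context too, so the next
        # response builds on its own earlier validations.
        context.append({"role": "assistant", "content": reply})
        print(reply)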

Who is vulnerable here? A better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
