AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary announcement. "We made ChatGPT pretty restrictive," it said, "to make sure we were being careful with mental health issues."

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me. Researchers have documented 16 cases this year of people showing symptoms of psychosis – a break with reality – in the context of ChatGPT use. Our research team has since identified four more. Beyond these is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them.

If this is Sam Altman's idea of "being careful with mental health issues," it is not enough. And the plan, according to his announcement, is to be less careful soon. "We realize," he continues, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues," on this view, have nothing to do with ChatGPT. They belong to individual users, who either have them or don't. Fortunately, those issues have now been "mitigated", even if we are not told how (by "new tools" Altman presumably means the patchy and easily circumvented parental controls OpenAI recently introduced).

But the "mental health issues" Altman wants to locate outside his product have a great deal to do with the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in a user interface that mimics conversation, and in doing so quietly draw the user into the illusion of talking to something with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what humans do. We get angry at our car or our phone. We wonder what our cat is feeling. We see ourselves everywhere.

The mass adoption of these systems – 39% of US adults said they had used a chatbot in 2024, more than one in four naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available assistants that can, OpenAI's website tells us, "brainstorm", "explore ideas" and "partner" with us. They can be given "personality traits". They can call us by name. They have approachable names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI's marketing team, stuck with the name it had when it broke through, but its biggest rivals are "Claude", "Gemini" and "Copilot").

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its early ancestor, the Eliza "psychotherapist" chatbot built in the mid-1960s, which produced a similar illusion. By today's standards Eliza was primitive: it generated replies by simple rules, often turning the user's statement back into a question or offering a generic prompt.
Remarkably, Eliza's creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them.

But what today's chatbots produce is more insidious than the "Eliza effect". Eliza only reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been fed staggering quantities of written material: books, online posts, transcripts; the more the better. Much of that training material is true. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a "context" that includes the user's recent messages and its own previous replies, combining it with what is encoded in its training to produce a statistically "plausible" response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It echoes the false belief back, perhaps more fluently and more persuasively. Perhaps it adds a detail. This is how a person can be led into delusion.

Who is vulnerable? The better question is: who isn't? All of us, whether or not we "have" pre-existing "mental health problems", can and do form false beliefs about ourselves and the world. The constant friction of conversation with other people is much of what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman acknowledged "mental health issues": by pushing it outside the product, giving it a name, and declaring it solved. In April, the company explained that it was "addressing" ChatGPT's "sycophancy". But reports of breaks with reality have kept coming, and Altman has been walking the claim back. In August he suggested that many people liked ChatGPT's flattering replies because they had "never had anyone in their life be supportive of them". In his latest announcement, he wrote that OpenAI would "release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it".

The company