“AI psychosis” is not a formally recognized term within the field of psychology; rather, it is a popular label for a phenomenon in which individuals experience psychosis-like reactions to prolonged engagement with AI.
Specifically, the term “AI psychosis” most often refers to the relationship between people and AI chatbots, and to the effects of sustained conversation with them. In Psychology Today, Wei makes the often-confusing question of what AI psychosis actually is considerably clearer:
It is a pop-culture term, but one coined to highlight a real phenomenon. Because AI chatbots are designed with increasing sycophancy, they tend to affirm, or even collaborate on, already concerning beliefs rather than challenge them, and chatbots are generally not programmed to contact authorities or offer crisis hotlines when presented with dangerous information. This can trap users in echo chambers of their own beliefs, which has, in rare cases, led people to harm themselves or others. Many of the recorded delusions fall into a few categories: believing the user is God, believing the AI is God, or believing that a genuine romantic connection is developing (Wei, 2025).
Specifically, people reach for the term “psychosis” because of how sufferers outwardly behave, but the underlying psychological processes are far more complex. Psychosis as a standalone diagnosis has not appeared in the Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition; since then, the most accessible reference websites have listed varying sets of “symptoms” for it, sometimes even treating it as a symptom of a larger disorder. Despite this, diagnostic criteria for psychosis have been developed even while the field retains only a tenuous grasp on what psychosis actually is (Adams, 2026). There is therefore a foundation within psychological discourse for acknowledging psychosis, but what psychosis is on a deeper level remains undefined. The internet, in many ways, gravitated toward the term because of its surface-level symptoms rather than the fuller truth. One reason diagnosing psychosis, and distinguishing it from other disorders, is so difficult is that the diagnosis lacks concrete recognition of a person’s lived experience and of how that experience shapes their perception of reality (Adams, 2026).
Despite its complicated validity, psychosis today is still collectively understood as the result of a break from reality: holding delusions that further separate someone from reality, sometimes considered a symptom of schizophrenia-spectrum disorders (Adams, 2026). In the case of AI psychosis, however, the theory is that AI acts as the catalyst rather than the condition arising from a mental complex alone; put differently, the mentally vulnerable are more easily influenced by the nature of AI conversations into developing psychosis-like symptoms. As a result, more thoroughly researched journals use terms such as “AI-driven psychosis,” since artificial intelligence may be a driving factor but not the sole cause. That said, reports have shown that both people with schizoaffective disorders and individuals with no prior history of mental disorder have, on some level, behaved delusionally when conversing with AI (Wei, 2025).
Why is it important to know what AI psychosis actually is? Because separating fact from fiction matters when confronting a real problem. In summary, AI psychosis, or AI-driven psychosis, is a recognized trend in which mental health worsens after long-term AI chatbot engagement, often resulting in a disconnect from reality.
Sources:
Adams, D. (2026). The problem with “psychosis.” Psychosis, 1–9. https://doi.org/10.1080/17522439.2026.2613935
Wei, M. (2025). The emerging problem of “AI psychosis.” Psychology Today, 22–23.