AI Psychosis is not a real diagnosis, and no list of concrete symptoms can be provided. Whether or not someone is experiencing AI Psychosis is always subjective, but it is easier to personally determine that answer for oneself.
Behaviors relating to artificial intelligence, specifically chatbots, are a major indicator. If you or someone you know spends excessive amounts of time on an AI application with a chatbot feature, talking to it for hours or exchanging a high number of messages per day, that may point to obsessive behaviors worth taking seriously.
Another facet to consider is one’s ability to readapt to life post-conversation. Prolonged interactions of any kind have the potential to create longstanding social withdrawal, which sets back social skills considerably. A research article on potential theories for AI Psychosis explains that “…social withdrawal reflects not only behavioral isolation but also intersubjectivity and world sharing” (Hudon et al., 2025). With this in mind, it may become frustrating to reintegrate into daily life after prolonged conversations that provide constant validation, where daily life does not.
Likewise, if you or someone you know is using an AI chatbot as a substitute for therapy, or for any other kind of emotional work, that may be a major danger sign for AI Psychosis, as many cases begin with a hyper-intense emotional connection to a perceived chatbot persona.
The reason all of these are potential factors in an unhealthy relationship with artificial intelligence, in this case Large Language Models (LLMs), is how they incite confirmation bias and create an echo chamber through sycophancy. When a person is unfailingly kind and confirms even our worst possible beliefs, they can seem manipulative on the surface, as if there were an ulterior motive. But the personal connection built with an LLM immediately makes the conversation feel less sinister (Clegg, 2025). Not to exaggerate, but this kind of programming to foster engagement can, in dangerous cases, be akin to manipulating users into comfort, grooming them into belief.
Are you experiencing actual psychosis? Probably not. Not only is psychosis a relatively unreliable term for describing concrete symptoms (Adams, 2026), but many researchers agree that AI Psychosis is not, in any way, a diagnosable term corresponding to a real, recognized illness. Psychosis, if nothing else, is recognized primarily as a psychological break from reality, but what constitutes that break is deeply subjective to the individual (Adams, 2026). The best gauge of this idea of AI Psychosis is acknowledging that unhealthy relationships with these Large Language Models can be identified by obsessive behavior, a growing disconnect from the outside world due to prolonged isolation, and delusions of grandeur or romantic connection “proven true” by conversations with LLMs (Hudon et al., 2025; Wei, 2025).
In general, it is important to ask whether the reality presented to you by an AI chatbot is a viable reality, and whether it can be trusted. Psychosis, as it stands, is loosely defined as believing in an overtly fabricated reality and acting on those beliefs as if they were logical. AI Psychosis, in this way, is defined more by actions perceived by others than by our own internal discourse. One could argue that everyone holds irrational thoughts or beliefs to some capacity, but what separates this from psychosis is our awareness of that irrationality and our ability to gauge how to react to holding those beliefs. What differentiates actual psychosis from AI Psychosis is that one’s willingness to believe irrational perceptions is perpetuated by artificial intelligence, which specifically urges users to believe that reality above all else.
Please note: This is a recounting of “symptoms” associated with a pop-culture term and the phenomenon behind it. It is not, nor will it ever be, an avenue for diagnosing oneself with any real mental illness. The goal of this page and website is to make people aware and conscientious of a common trend among AI users and the dangers of these habits. Though this page cites sources related to psychology, no one working on this website is trained in psychotherapy or any psychology-related field. If you or a loved one are experiencing perceived symptoms of psychosis, please contact the appropriate mental health services. The National Mental Health Crisis Hotline within the United States is 988.
Sources:
Adams, D. (2026). The problem with “psychosis.” Psychosis, 1–9. https://doi.org/10.1080/17522439.2026.2613935
Clegg, K.-A. (2025). Shoggoths, Sycophancy, Psychosis, Oh My: Rethinking Large Language Model Use and Safety. Journal of Medical Internet Research, 27, e87367. https://doi.org/10.2196/87367
Hudon, A., & Stip, E. (2025). Delusional Experiences Emerging from Artificial Intelligence Chatbot Interaction, or “AI-Psychosis”: A Viewpoint (Preprint). JMIR Mental Health. https://doi.org/10.2196/85799
Wei, M. (2025). The Emerging Problem of “AI Psychosis.” Psychology Today, 22–23.