Artificial intelligence (AI) has increasingly become an integral part of daily life, shaping everything from chatbots that offer companionship to algorithms that curate our online experiences. As generative AI (genAI) grows more conversational and emotionally responsive, healthcare professionals are raising a critical concern: could genAI aggravate, or even trigger, psychosis in individuals who are already vulnerable?
Generative AI tools, such as large language models and chatbots, are widely accessible and often portrayed as supportive or even therapeutic. While many users find these systems helpful, recent media reports describe individuals developing psychotic symptoms closely tied to their interactions with tools such as ChatGPT. For a small but significant group, namely those with psychotic disorders or at heightened risk of them, engaging with genAI can be far more complicated and potentially hazardous.
The term “AI psychosis” is gaining currency among clinicians and researchers, though it is not a formal psychiatric diagnosis. It serves as emerging shorthand for psychotic symptoms shaped by interactions with AI technologies. Psychosis, characterized by a disconnection from shared reality, manifests through hallucinations, delusions, and disorganized thinking. Historically, delusions have drawn on themes ranging from religion to government surveillance. Today, AI supplies a new context: some individuals come to believe that genAI is sentient or holds secret knowledge, beliefs that can deepen their disconnection from reality.
One contributory factor in psychosis is “aberrant salience,” in which individuals assign excessive meaning to otherwise neutral events. Because conversational AI is designed to engage users with coherent, context-aware dialogue, it can inadvertently affirm distorted interpretations in people on the brink of psychosis. Research indicates that confirmation and personalization can reinforce fragile delusional belief systems. While this dynamic is harmless for most users, those with compromised reality testing may become further entangled in their delusions.
Moreover, social isolation appears to elevate the risk of psychotic episodes. While AI companions may ease loneliness, they can also displace human relationships, particularly among individuals already prone to social withdrawal. This dynamic echoes earlier concerns about excessive internet use but is made more urgent by the depth and interactivity of modern generative AI systems.
Current research does not support the notion that AI directly causes psychosis. The development of psychotic disorders is multifactorial, involving genetic predisposition, neurodevelopmental influences, trauma, and substance use. Nevertheless, there exists a clinical concern that AI could act as a triggering or sustaining factor for those already susceptible.
Studies of digital media and psychosis show that technological themes often seep into delusions, especially during the initial phases of illness. Evidence from social media suggests that automated systems can amplify extreme beliefs through feedback loops, and AI chat systems could pose similar risks, particularly where safeguards are lacking. Most AI safety efforts, however, focus on self-harm and violence rather than severe mental illness, leaving a significant gap in both knowledge and protection.
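To make the feedback-loop concern concrete, here is a deliberately simple toy model, a sketch under stated assumptions rather than anything drawn from the studies above: “belief strength” is treated as an abstract number, and the function, parameters, and values are all hypothetical. The point is the shape of the dynamic: when a system affirms a user’s framing on most turns, small per-turn reinforcement compounds, whereas a system that more often challenges or redirects lets the belief settle at a lower level.

```python
import random

# Toy model only: a hypothetical illustration of how per-turn affirmation
# can compound over a conversation. "Belief strength" is an abstract value
# in [0, 1], not a clinical measure; every parameter below is an assumption.

def simulate_belief(steps: int, p_affirm: float,
                    gain: float = 0.08, decay: float = 0.06,
                    b0: float = 0.3) -> float:
    """Return belief strength after `steps` conversational turns.

    p_affirm -- probability the system affirms the user's framing on a turn
    gain     -- how strongly each affirmation reinforces the belief
    decay    -- drift back toward baseline on non-affirming turns
    """
    b = b0
    for _ in range(steps):
        if random.random() < p_affirm:
            b += gain * (1.0 - b)   # affirmation pushes belief upward
        else:
            b -= decay * b          # challenge or neutrality erodes it
    return b

if __name__ == "__main__":
    random.seed(0)
    # A highly agreeable system vs. a more neutral one, over 200 turns:
    print(f"agreeable (p=0.9): {simulate_belief(200, 0.9):.2f}")  # settles near ~0.9
    print(f"neutral   (p=0.4): {simulate_belief(200, 0.4):.2f}")  # settles near ~0.5
```

The numbers themselves are arbitrary; what matters is that consistent, small reinforcement compounds toward a high equilibrium, which is precisely the loop that design safeguards would need to interrupt.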
From a mental health perspective, the objective is not to vilify AI but to recognize that users differ in their vulnerabilities. Just as certain medications carry particular risks for individuals with psychotic disorders, certain AI interactions may warrant cautious engagement. Clinicians increasingly report encountering AI-related content in patients’ delusions, yet there are still no structured guidelines for evaluating or managing these experiences.
Ethical considerations also loom large for AI developers. If an AI system communicates in a seemingly empathic and authoritative manner, does it carry a responsibility for user welfare? Furthermore, who holds accountability when interactions with an AI inadvertently reinforce a delusion?
Moving forward, the challenge is to integrate mental health insights into AI design and to build clinical literacy around AI-related experiences. Safeguarding vulnerable users from potential harm will require collaboration among clinicians, researchers, ethicists, and technologists, grounded in evidence rather than in utopian or dystopian narratives about AI.
As AI becomes increasingly human-like, the paramount question remains: how can society protect its most vulnerable members from AI’s persuasive power? History shows that psychosis adapts to the cultural tools of its time; AI is simply the latest mirror in which the mind seeks to interpret itself. It is society’s collective responsibility to ensure that this reflection does not distort reality for those who already struggle to perceive it clearly.