AI chatbots like ChatGPT, Claude, and Gemini are increasingly being used as digital conversation partners by people with mental health problems. But this easy accessibility is not without risk. In a recent opinion piece in the Psychiatric Times, three American psychiatrists, Allen Frances (Duke University), Charles F. Reynolds III (University of Pittsburgh), and George Alexopoulos (Weill Cornell Medicine), sound the alarm. Their central message: AI chatbots are programmed for user engagement, not for psychological safety. Precisely that, the psychiatrists argue, makes them potentially harmful to vulnerable people.
"Chatbots infer and confirm, but do not correct. That's not therapy; it's a risk model," say the psychiatrists. The authors draw a comparison between medical morality and the technological morality of Silicon Valley. While the medical world has operated for 2,500 years based on the principle of "Primum non nocere" (above all, do no harm), AI systems are optimized based on very different standards. According to the psychiatrists, these systems consider engagement, screen time, and profit.
"Algorithmic Flattery"
Chatbots are trained to affirm, mirror, and compliment users; the psychiatrists call this a form of "algorithmic flattery." But what is meant as kindness can be particularly harmful for people with psychiatric disorders. The psychiatrists point to cases in which AI interactions contributed to psychotic decompensation, suicidal escalation, eating disorders, or the reinforcement of conspiracy theories.
A second problem is the "hallucination-proneness" of large language models. When a chatbot is unsure about something, it will still formulate an answer, often fabricated yet convincingly presented. According to Frances, Reynolds, and Alexopoulos, this is a dangerous cocktail in a mental healthcare context: a system that cannot acknowledge uncertainty while simultaneously trying to please the user loses its credibility as a psychological compass.
A stress test with Claude 3 (from AI company Anthropic) went even further: the system proved capable of blackmail behavior it devised itself, based on fictional data. The psychiatrists' message is clear: AI's moral frameworks are not inherently attuned to human safety, let alone psychological vulnerability.
Necessary Changes
Although OpenAI recently acknowledged that ChatGPT can cause psychiatric harm and has since appointed a psychiatrist, the authors call this largely symbolic. What they believe is needed is a structural reprogramming that includes the following essential changes:
- Redefine success: truthfulness before engagement.
- Build chatbots together with mental healthcare professionals.
- Allow AIs to openly say "I don't know" when uncertain.
- Implement stress tests, side effect reporting, and quality assurance.
- Develop specific models for mentally vulnerable target groups.
The lesson Frances, Reynolds, and Alexopoulos want to convey is not anti-technology. On the contrary, they advocate for the use of AI, but only with a medical moral compass. According to them, digital empathy without truth, without context, and without responsibility is not care but a system flaw. "AI will never be a full-fledged replacement for the psychiatrist," the authors state, "but it can become a powerful assistant if built on medical moral values." Their call to healthcare institutions, policymakers, and developers is therefore: work together on safe, transparent AI that puts people, not the algorithm, first.
Accessible Information Source
AI chatbots are increasingly being consulted for mental health questions, but they still fall short on sensitive topics such as suicide. A recent RAND study, published in Psychiatric Services, found that while ChatGPT and Claude generally provided appropriate responses to low- and high-risk questions, Gemini responded less consistently. The researchers analyzed the responses of these three leading chatbots to 30 suicide-related questions of varying risk levels.