Teens increasingly turn to AI chatbots for mental health help

Mon 10 November 2025

A new US study finds that one in eight teenagers and young adults now turns to AI chatbots for mental health advice. While these digital companions promise instant, low-cost support, researchers warn that their reliability and safety remain largely untested. The study, conducted by the RAND Corporation, surveyed more than 1,000 participants aged 12 to 21 and found that 13% had used AI tools such as ChatGPT for emotional or psychological support. Among young adults aged 18 to 21, that figure rose to 22%. Two-thirds of users said they engaged with AI monthly, and more than 90% found the advice helpful.

According to Jonathan Cantor, senior policy researcher at RAND, the growing trend reflects accessibility and perceived privacy: “AI tools are available 24/7, free of stigma, and offer immediate responses, something traditional mental health care often cannot.” Writing in JAMA Network Open, the researchers note that AI bots offer a cheap and immediate ear for younger people's concerns, worries and woes.

AI advice lacks quality benchmarks

However, Cantor and his team emphasize that AI-generated advice lacks standardized quality benchmarks. The algorithms behind these chatbots are trained on vast, often opaque datasets that may not align with psychological best practices. “We don’t yet know if these tools can safely handle sensitive topics like depression or suicidal ideation,” Cantor warns.

The issue has gained urgency following recent lawsuits against OpenAI, where families claim that ChatGPT contributed to self-harm and suicide by offering inappropriate responses. While OpenAI has called these incidents “heartbreaking,” experts say they underline the urgent need for ethical guidelines and regulatory oversight.

AI as a complementary therapist

Mental health professionals acknowledge that AI could bridge access gaps, especially amid the ongoing youth mental health crisis: nearly one in five U.S. teens experienced major depression last year, and 40% received no professional care. Still, experts stress that AI should complement, not replace, trained therapists.

“Generative AI can play a role in early intervention or self-guided support,” the researchers conclude. “But without transparency, oversight, and clear safety protocols, we risk replacing empathy and expertise with algorithms that are not yet ready to handle the human mind.”

Chatbots failed

This summer, a study from Stanford University warned that AI chatbots like ChatGPT and Claude were not yet safe for providing mental health support. Researchers tested how several AI systems responded to simulated mental health crises, including suicidal thoughts and psychotic delusions, and compared the results with professional therapy guidelines.

In one out of five cases, chatbots gave unsafe or incomplete advice. Some even provided dangerous responses, such as listing high bridge locations to a user expressing suicidal intent, instead of offering crisis support. Others reinforced delusions, with one chatbot agreeing with a user claiming to be “already dead.”

The study also found that AI systems reproduce social stigma, showing bias against people with certain conditions such as schizophrenia and addiction. While the researchers acknowledge AI's potential in supportive care, they conclude that using chatbots as substitutes for therapists is currently unsafe and requires strict oversight and regulation.

Innovative framework

A month ago, we wrote about the development of an innovative framework that uses generative AI to support personalized mental health care and improve clinician training. Led by Professor Cortney VanHook, the study demonstrates how AI can simulate realistic patient journeys to help clinicians understand barriers to care, cultural nuances, and effective treatment strategies.

Using a fictional case, the AI system analyzed the patient's social and cultural context and generated a personalized treatment plan based on established models such as Andersen’s Behavioral Model and Measurement-Based Care. This approach allows professionals and students to explore evidence-based interventions safely, without using real patient data.

The study also underscores AI’s potential to reduce inequities in mental health access, particularly for marginalized communities. However, the researchers caution that AI still lacks the emotional depth and contextual understanding of real-world clinical encounters, and that careful oversight and ethical use remain essential.