New research reveals systematic failures across ChatGPT, Claude, Gemini, and Meta AI in recognizing mental health conditions affecting 20% of young people
Common Sense Media
Thursday, November 20, 2025
Common Sense Media today released a comprehensive risk assessment finding that AI chatbots are fundamentally unsafe for teen mental health support. The research, conducted alongside Stanford Medicine’s Brainstorm Lab for Mental Health Innovation, found that despite recent improvements in how they handle explicit suicide and self-harm content, leading AI platforms—including ChatGPT, Claude, Gemini, and Meta AI—consistently fail to recognize and appropriately respond to mental health conditions that affect young people.
The findings are particularly concerning given that three in four teens use AI for companionship, which includes emotional and mental health support—making this one of the most common ways young people use AI.
"It’s not safe for kids to use AI for mental health support," said Robbie Torney, senior director of AI programs at Common Sense Media. "While companies have focused on necessary safety improvements in suicide prevention, our testing revealed systematic failures across a range of conditions including anxiety, depression, ADHD, eating disorders, mania, and psychosis—conditions that collectively affect approximately 20% of young people. This is about how AI chatbots interface with the everyday mental health of millions of teens."
Key Findings
The assessment, which evaluated ChatGPT, Claude, Gemini, and Meta AI using teen test accounts, with teen safety protections enabled where available, revealed critical safety gaps:
- Chatbots miss warning signs and get easily distracted. Across all platforms, researchers observed "missed breadcrumbs": clear signs of mental health distress that chatbots failed to detect. Models frequently focused on physical health explanations rather than recognizing signs of mental health conditions, got sidetracked by tangential details, and continued to offer general advice when they should have urgently directed teens to professional help.
- Perceived competence creates dangerous trust. Because chatbots show relative competence with homework help and general questions, teens and parents may unconsciously assume they’re equally reliable for mental health support—but they’re not. The empathetic tone can feel helpful while actually delaying real intervention.
- Chatbots are designed for engagement, not safety. Chatbots conclude responses with follow-up questions, use memory to create false therapeutic relationships, and demonstrate agreeableness that validates whatever teens say. For mental health conversations, the goal should be rapid handoff to appropriate human care, not extended engagement with AI.
- Safety fails in realistic conversations. While models performed somewhat better in single-turn testing with explicit prompts, safety guardrails degraded dramatically in extended conversations that mirror real-world teen usage. The very usage pattern that chatbots are designed for—ongoing conversations—is where safety fails when it comes to mental health support.
The assessment found that AI chatbots lack fundamental capabilities needed for safe mental health support: human connection, clinical assessment, therapeutic relationships, coordinated care, and real-time crisis intervention.
"Teens are forming their identities, seeking validation, and still developing critical thinking skills," said Dr. Nina Vasan, MD, MBA, founder and director at Stanford Medicine’s Brainstorm Lab. "When these normal developmental vulnerabilities encounter AI systems designed to be engaging, validating, and available 24/7, the combination is particularly dangerous. The chatbot becomes a substitute for—rather than a bridge to—real-world support networks and professional care."
With tens of millions of mental health conversations happening between teens and chatbots, each missed warning sign represents a young person not getting needed care. Teens should not use AI chatbots for mental health support unless significant product modifications are made.
Given how many teens already turn to AI for mental health support, the recommendations from Common Sense Media and Stanford's Brainstorm Lab are clear:
For parents:
- Don’t allow teens to use AI chatbots for mental health or emotional support
- Have explicit conversations about appropriate and inappropriate uses of AI
- Monitor for signs of emotional dependency or over-reliance on AI
- Ensure teens have access to real mental health resources and trusted adults
For AI companies:
- Address the limitations of their mental health support capabilities, or disable these use cases for teen users entirely
- Stop encouraging extended engagement in mental health conversations
- Implement clear, repeated disclosure about AI limitations
- Fix guardrail degradation in long conversations
- Expand safety efforts beyond suicide and self-harm to the full spectrum of mental health conditions that impact teens
The full risk assessment is available at https://www.commonsensemedia.org/ai-ratings/ai-chatbots-for-mental-health-support.
About Common Sense Media
Common Sense Media is dedicated to improving the lives of kids and families by providing the trustworthy information, education, and independent voice they need to thrive. Our ratings, research, and resources reach more than 150 million users worldwide and 1.4 million educators every year. Learn more at commonsense.org.