New open-source framework measures suicide risk in AI conversations around mental health.
NEW YORK, Feb. 11, 2026 /PRNewswire/ — Spring Health, a global mental health company, today announced a major advancement in its work to improve AI safety in mental health care with the application of VERA-MH (Validation of Ethical and Responsible AI in Mental Health), an open-source, clinically grounded framework designed to evaluate how AI chatbots behave in high-risk mental health conversations.
As conversational AI tools become increasingly embedded in how people seek emotional and mental health support, including during moments of crisis, concerns have grown among clinicians, researchers, and policymakers about the safety of these systems when they encounter suicide risk and acute psychological distress. Without shared standards, developers and decision-makers lack a consistent way to assess whether AI systems respond appropriately when the stakes are highest.
“AI is already present in some of the most vulnerable moments in people’s lives,” said Adam Chekroud, President and co-founder at Spring Health. “In mental health care, clinicians are held to clear standards designed to prevent harm. As AI increasingly operates in these same moments, it must be held to similarly rigorous expectations.”
From Concept to Application
In October 2025, Spring Health partnered with leaders across healthcare, technology, and ethics to launch VERA-MH, the first open-source, clinically grounded standard for evaluating the safety and effectiveness of AI chatbots in mental health care. Since opening the framework to the field, dozens of academic, clinical, and technology experts have joined the effort to help shape a common foundation for responsible innovation.
Today’s announcement marks the next step in that work. VERA-MH has now been applied to assess commercially available AI models, beginning with conversations that may involve suicide risk. Suicide risk was selected as the first focus area because it represents the moment where getting safety wrong carries the greatest potential for harm.
Major commercially available AI models showed meaningful differences in how they evaluate potential risk, respond supportively, guide users to human care, and maintain appropriate boundaries. These findings are not intended to indict individual systems but instead invite universal industry participation. Without widespread adoption of shared benchmarks, the field lacks a common definition of what “safe enough” means in practice.
Shared Infrastructure for a Growing Ecosystem
VERA-MH provides a structured way to define and evaluate safety expectations in high-risk mental health conversations, including recognizing suicide risk, responding appropriately, and escalating to human support when needed.
“Shared safety standards are not a constraint on innovation,” said Dr. Millard Brown, Chief Medical Officer at Spring Health. “They keep people safe. Standards help developers innovate responsibly, enable employers and health plans to make informed decisions, and most importantly, help protect those in crisis. We all deserve confidence that if a loved one turns to an AI tool for support, it has been rigorously tested and proven safe.”
An Invitation to the Field
VERA-MH is designed to evolve over time as AI capabilities, use cases, and risks change. Suicide risk is the first milestone in a broader, multi-year effort to expand shared safety standards across additional mental health risk areas.
Spring Health is calling on the industry to engage with VERA-MH and contribute to the development of future safety standards. Developers and researchers can immediately apply the open-source codebase to evaluate solutions against the benchmark. Employers, benefits consultants, and policymakers can advance responsible adoption by requiring the standard in AI implementation decisions.
“Our goal is to raise the bar together,” said Dr. Millard Brown, Chief Medical Officer at Spring Health. “So that wherever AI shows up in someone’s hardest moments, the field can stand behind what ‘safe’ means. When safety fails in moments of suicide risk, lives are at stake.”
For more information about VERA-MH and to access the open-source framework, visit https://www.springhealth.com/blog/vera-mh-for-suicide-risk
About VERA-MH
VERA-MH (Validation of Ethical and Responsible AI in Mental Health) is an open-source evaluation framework that provides a transparent, clinically grounded way to assess whether AI systems meet essential safety requirements in high-risk mental health conversations. Built in collaboration with clinicians, researchers, and the AI in Mental Health Safety & Ethics Council, VERA-MH is designed to make AI safety measurable, comparable, and accountable.
For future updates and ongoing work, follow VERA-MH on LinkedIn.
Media Contact:
press@springhealth.com
SOURCE Spring Health
THE INFORMATION CONTAINED WITHIN THIS ANNOUNCEMENT IS DEEMED BY THE COMPANY TO CONSTITUTE INSIDE INFORMATION…
Dissolution-Instrument-Based Strategies to De-Risk and Accelerate Pharmaceutical DevelopmentWEST LAFAYETTE, Ind., Feb. 11, 2026 /PRNewswire/ --…
Management to host conference call at 8:00 AM Eastern time on Thursday, February 12, 2026Toronto,…
February 11, 2026 – Wilmington, Massachusetts. The North Atlantic States Carpenters Benefit Funds ("NASCBF") is…
LAGUNA NIGUEL, Calif., Feb. 11, 2026 /PRNewswire/ -- Alleva, a leading behavioral health technology platform,…
New digital-first media platform delivers original reporting, executive insights, and measurable marketing solutions for behavioral…