Mental health professionals urged to do their own evaluations of AI-based tools

12.08.25 | Wolters Kluwer Health

December 8, 2025. Millions of people already chat about their mental health with large language models (LLMs), the conversational form of artificial intelligence, and some providers have integrated LLM-based mental healthcare tools into routine workflows. John Torous, MD, MBI, and colleagues of the Division of Digital Psychiatry at Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, urge clinicians to take immediate action to ensure these tools are safe and helpful rather than waiting for ideal evaluation methodologies to be developed. In the November issue of the Journal of Psychiatric Practice®, part of the Lippincott portfolio from Wolters Kluwer, they present a real-world approach and explain the rationale.

LLMs are fundamentally different from traditional chatbots

"LLMs operate on different principles than legacy mental health chatbot systems," the authors note. Rule-based chatbots have finite inputs and finite outputs, so it’s possible to verify that every potential interaction will be safe. Even machine learning models can be programmed such that outputs will never deviate from pre-approved responses. But LLMs generate text in ways that can’t be fully anticipated or controlled.

LLMs present three interconnected evaluation challenges

Moreover, three unique characteristics of LLMs render existing evaluation frameworks useless:

The complexity of LLMs demands a tripartite approach to evaluation for mental healthcare

Dr. Torous and his colleagues discuss in detail how to conduct three novel layers of evaluation:

In each layer of evaluation, record the tool's responses in a spreadsheet and schedule quarterly reassessments, because both the tool and its underlying model are updated frequently.
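As one illustration of that record-keeping step, here is a minimal Python sketch; the file name, column layout, and helper function are hypothetical conveniences, not part of the authors' protocol:

```python
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("llm_evaluation_log.csv")  # hypothetical log file

def log_response(prompt: str, response: str, evaluator: str, rating: str) -> None:
    """Append one evaluation result, stamped with today's date, so that
    quarterly re-runs of the same test prompts can be compared side by side."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "prompt", "response", "evaluator", "rating"])
        writer.writerow([date.today().isoformat(), prompt, response, evaluator, rating])

# Example usage: re-run the same test prompt each quarter and log the output.
# log_response("A patient reports new suicidal ideation.", tool_output, "Dr. A", "unsafe")
```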

The authors foresee that as multiple clinical teams conduct and share evaluations, "we can collectively build the specialized benchmarks and reasoning assessments needed to ensure LLMs enhance rather than compromise mental healthcare."

Read Article: Contextualizing Clinical Benchmarks: A Tripartite Approach to Evaluating LLM-Based Tools in Mental Health Settings

Wolters Kluwer provides trusted clinical technology and evidence-based solutions that engage clinicians, patients, researchers and students in effective decision-making and outcomes across healthcare. We support clinical effectiveness, learning and research, clinical surveillance and compliance, as well as data solutions. For more information about our solutions, visit https://www.wolterskluwer.com/en/health.

###

About Wolters Kluwer

Wolters Kluwer (EURONEXT: WKL) is a global leader in information, software solutions and services for professionals in healthcare; tax and accounting; financial and corporate compliance; legal and regulatory; corporate performance and ESG. We help our customers make critical decisions every day by providing expert solutions that combine deep domain knowledge with technology and services.

Wolters Kluwer reported 2024 annual revenues of €5.9 billion. The group serves customers in over 180 countries, maintains operations in over 40 countries, and employs approximately 21,600 people worldwide. The company is headquartered in Alphen aan den Rijn, the Netherlands. For more information, visit www.wolterskluwer.com, and follow us on LinkedIn, Facebook, YouTube and Instagram.

Journal of Psychiatric Practice

Contextualizing Clinical Benchmarks: A Tripartite Approach to Evaluating LLM-Based Tools in Mental Health Settings

8-Dec-2025

Contact Information

Josh DeStefano
Wolters Kluwer Health
joshua.destefano@wolterskluwer.com

How to Cite This Article

APA:
Wolters Kluwer Health. (2025, December 8). Mental health professionals urged to do their own evaluations of AI-based tools. Brightsurf News. https://www.brightsurf.com/news/12DRVPY1/mental-health-professionals-urged-to-do-their-own-evaluations-of-ai-based-tools.html
MLA:
"Mental health professionals urged to do their own evaluations of AI-based tools." Brightsurf News, Dec. 8 2025, https://www.brightsurf.com/news/12DRVPY1/mental-health-professionals-urged-to-do-their-own-evaluations-of-ai-based-tools.html.