Suicide prevention measures can help AI better protect young users

04.20.26 | Canadian Medical Association Journal

Suicide prevention approaches are key to ensuring that conversational AI is safe for youth, argues a commentary in CMAJ (Canadian Medical Association Journal): https://www.cmaj.ca/lookup/doi/10.1503/cmaj.251693. The adoption of AI chatbots by youth as a source of mental health support makes AI safety an urgent issue.

“With most teens reporting use of artificial intelligence (AI) ‘companions,’ conversational AI is rapidly becoming a first point of contact for distress and suicidality — often before clinicians or families are aware,” write Dr. Allison Crawford, psychiatrist, associate scientist, and chief medical officer for the 9-8-8 Suicide Crisis Helpline at the Centre for Addiction and Mental Health (CAMH), and Dr. Tristan Glatard, scientific director of the Krembil Centre for Neuroinformatics, CAMH, Toronto, Ontario.

A recent survey of 1,060 youth aged 13 to 17 years in the US found that 72% reported using an AI companion and 52% reported regular use. According to recent data from OpenAI, more than 1.2 million ChatGPT users of all ages express suicidal ideation in their interactions each week.

The authors note AI's dual nature: a tool that can offer a sympathetic ear and potential support to someone in distress, but that may also cause additional harm to people who are already vulnerable.

“A well-designed chatbot can normalize help-seeking, reduce isolation, and offer coping strategies at moments of distress; it could even support treating clinicians by helping to identify symptom patterns, early warning signs, and opportunities for outreach. However, in cases where poorly designed AI fails to recognize suicidality, mishandles disclosures, or provides unsafe or misleading responses, real harms can arise.”

The authors argue that this is a pressing public health issue requiring suicide-prevention approaches to be built into AI safety, along with safeguards from AI companies, data and legal protections, and more.

“The limits of AI agents should be acknowledged; such tools should have robust safeguards built in and direct users toward friends, family, community helpers, and trained crisis professionals as appropriate,” they write. “Embedding safeguards, partnering with experts and youth, and maintaining humility about the limits of technology can help ensure that AI serves as a bridge — not a barrier — to the human connections that are known to prevent suicide.”

Canadian Medical Association Journal

10.1503/cmaj.251693

Urgent considerations for suicide prevention in the safe and ethical use of artificial intelligence

20-Apr-2026

Contact Information

Kim Barnhardt
Canadian Medical Association Journal
Kim.Barnhardt@cmaj.ca
