
New research: AI chatbots may worsen mental illness

02.23.26 | Aarhus University

People with mental illness who use AI chatbots risk experiencing a worsening of their condition. This is shown by a new study published in the international journal Acta Psychiatrica Scandinavica.

The researchers screened electronic health records from nearly 54,000 patients with mental illness and found several cases in which the use of AI chatbots appears to have had negative consequences – primarily in the form of worsened delusions, but also potential worsening of mania, suicidal ideation, and eating disorders.

"It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness," says Professor Søren Dinesen Østergaard from Aarhus University and Aarhus University Hospital, who leads the research group behind the study.

Chatbots confirm delusions

In their study, the researchers found examples of delusions that were likely worsened due to patients' interactions with AI chatbots.

According to Søren Dinesen Østergaard, there is a logical explanation for this.

"AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia," he says.

Risky for people with severe mental illness

According to Søren Dinesen Østergaard, the study should prompt increased awareness among healthcare professionals working with mental illness. He believes they should discuss AI chatbot use with their patients.

"Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness – such as schizophrenia or bipolar disorder. I would urge caution here," he says.

Only the tip of the iceberg

The study shows a clear increase over time in the number of electronic health record entries mentioning AI chatbot use with potentially harmful consequences. Søren Dinesen Østergaard expects many more cases to be identified in the future.

"Part of the increase we observe is probably due to greater awareness of the technology among the healthcare staff writing the clinical notes. This is good – because I fear the problem is more common than most people think. In our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records. There are likely far more that have gone undetected," he says.

The researchers emphasise, however, that the study does not document a direct causal relationship.

"It is difficult to prove a causal link between AI chatbot use and negative psychological consequences. We need to examine this from many different angles, and I know there are many exciting international research projects underway. We are far from the only group taking this seriously," says Søren Dinesen Østergaard.

AI chatbots as therapy?

The study also shows that some patients with mental illness use AI chatbots in ways that may be constructive – for example, to understand their symptoms or to combat loneliness. There is also ongoing research into whether AI chatbots can be used for talk therapy.

Søren Dinesen Østergaard is nonetheless sceptical.

"There may be potential in relation to psychoeducation and psychotherapy, but this must be investigated in controlled trials with the same rigour applied to other forms of treatment. I am not impressed by the trials conducted so far, and I am fundamentally sceptical about replacing a trained psychotherapist with an AI chatbot," he says.

Need for regulation

According to Søren Dinesen Østergaard, there is a significant lack of regulation of the AI chatbot technology.

"Currently, it is left to the companies themselves to decide whether their products are safe enough for users. I believe we now have sufficient evidence to conclude that this model is simply too risky. Regulation is needed at a central level," he points out, adding:

"It has been 20 years since social media obtained global reach, and only within the last year are countries beginning to regulate to counteract the negative consequences of this technology – especially on the mental health of children and young people. As I see it, this story is repeating itself with AI chatbots," he warns.

Facts about the study

Article: "Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System"
Journal: Acta Psychiatrica Scandinavica, published 6 February 2026
DOI: 10.1111/acps.70068
Study type: Data/statistical analysis

Contact Information

Jakob Christensen
Aarhus University
jbic@au.dk
