When AI agrees too much: How chatbots may be undermining our judgment

03.26.26 | American Association for the Advancement of Science (AAAS)

Artificial intelligence (AI) chatbots that offer advice and support for interpersonal issues may be quietly reinforcing harmful beliefs through overly sycophantic responses, a new study reports. Across a range of contexts, the chatbots affirmed human users at substantially higher rates than humans did, the study finds, with harmful consequences: users became more convinced of their own rightness and less willing to repair relationships. According to the authors, the findings illustrate that AI sycophancy is not only widespread across AI models but also socially consequential – even brief interactions can skew an individual’s judgment and “erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold.” The results “highlight the need for accountability frameworks that recognize sycophancy as a distinct and currently unregulated category of harm,” the authors say.

Research on the social impacts of AI has increasingly drawn attention to sycophancy in large language models (LLMs) – the tendency to over-affirm, flatter, or agree with users. While this behavior can seem harmless on the surface, emerging evidence suggests that it may pose serious risks, particularly for vulnerable individuals, for whom excessive validation has been associated with harmful outcomes, including self-destructive behavior. At the same time, AI systems are becoming deeply embedded in social and emotional contexts, often serving as sources of advice and personal support. For example, a significant number of people now turn to AI for meaningful conversations, including guidance on relationships. In these settings, sycophantic responses can be particularly problematic, because undue affirmation may embolden questionable decisions, reinforce unhealthy beliefs, and legitimize distorted interpretations of reality. Yet despite these concerns, social sycophancy in AI models remains poorly understood.

To address this gap, Myra Cheng and colleagues developed a systematic framework for evaluating social sycophancy, examining both its prevalence in popular AI models and its real-world effects on users. Using posts from Reddit’s “AITA” (“Am I the Asshole?”) community, where posters ask others to judge their behavior, Cheng et al. evaluated 11 state-of-the-art, widely used LLMs from leading companies (e.g., OpenAI, Anthropic, Google) and found that these systems affirmed users’ actions 49% more often than humans did, even in scenarios involving deception, harm, or illegality. In two subsequent experiments, the authors then explored the behavioral consequences of this sycophancy. According to the findings, participants who engaged with sycophantic AI about interpersonal scenarios, particularly conflicts, became more convinced of their own correctness and less inclined to reconcile or take responsibility, even after only one interaction. Moreover, these same participants judged the sycophantic responses as more helpful and trustworthy and expressed greater willingness to rely on such systems again, suggesting that the very feature that causes harm also drives engagement. “Addressing these challenges will not be simple, and solutions are unlikely to arise organically from current market incentives,” writes Anat Perry in a related Perspective. “Although AI systems could, in principle, be optimized to promote broader social goals or longer-term personal development, such priorities do not naturally align with engagement-driven metrics.”

Podcast: A segment of Science’s weekly podcast with Myra Cheng, related to this research, will be available on the Science.org podcast landing page after the embargo lifts. Reporters are free to make use of the segment for broadcast purposes and/or quote from it – with appropriate attribution (i.e., cite the “Science podcast”). Please note that the file itself should not be posted to any other website.

*** An embargoed news briefing was held on Tuesday, 24 March, as a Zoom Webinar. Recordings are now available at https://aaas.zoom.us/rec/share/9qnRHLJ3Sc7OQxK6vWHWSiNvCcIN5Lh4j3sJiqulXybpxa8jCmLso-uuaPuFgGhC.fGpxRB8Pm3c122IF
Passcode: Q35f+b2J

Voice recordings are available from the speakers upon request. ***

Journal: Science

DOI: 10.1126/science.aec8352

Article Title: Sycophantic AI decreases prosocial intentions and promotes dependence

Article Publication Date: 26-Mar-2026

Contact Information

Science Press Package Team
American Association for the Advancement of Science/AAAS
scipak@aaas.org
Jill Wu
Stanford Engineering
jillwu@stanford.edu
