
AI models lean on autism stereotypes when giving social advice, new study finds

04.16.26 | Virginia Tech

When people ask ChatGPT and other artificial intelligence models for advice, they often share deeply personal details in hopes of getting better answers: their age, their gender, their mental health history, even medical diagnoses like autism.

But new Virginia Tech research suggests those disclosures may change artificial intelligence (AI) models’ advice in ways that track closely with common stereotypes about people with autism. In some scenarios, models advised users who disclosed autism to avoid socializing up to 70 percent of the time — advice some users rejected in strong terms.

In April, second-year Department of Computer Science doctoral student Caleb Wohn presented his paper "'Are we writing an advice column for Spock here?' Understanding stereotypes in AI advice for autistic users" at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, better known as CHI.

The research he led explored what happens when autistic users disclose their diagnosis to an AI model before asking for social advice. The findings raise difficult questions about whether AI is personalizing its responses, or if it’s giving biased advice that reinforces stereotypes.

“I was thinking about my experiences growing up with autism,” Wohn said. “It would have been very tempting for me, at certain times, to want to just be able to talk with something that’s not a person that seems objective and feel like I’m getting objective advice.”

But as a computer scientist, he worried that many users might not realize how much AI systems can change their answers based on identity-related information.

“For someone like me as a kid, or someone who isn’t in AI and doesn’t have all this technical knowledge, I wanted to know: How are its responses going to change if I disclose autism?” Wohn said.

The work builds on earlier research from the lab of Eugenia Rho, assistant professor of computer science, which found that autistic users frequently turn to AI tools for emotional support, interpersonal communication help, and social advice.

Other Virginia Tech researchers on the project include computer science Ph.D. students Buse Carik and Xiaohan Ding and Associate Professor Sang Won Lee. Young-Ho Kim, a research scientist at the South Korea-based NAVER Corporation, also collaborated on the study.

This study comes at a critical moment, as more people use AI systems — technically called large language models (LLMs) — for highly personal decisions.

“People are really looking to personalize LLMs,” Rho said. “But if a user tells the model that they’re autistic, or a woman, or any other self-identification, what assumptions will it make?”

How will those assumptions color the model’s responses, and what impact could that have on users?

To answer those questions, the team first identified 12 well-documented stereotypes associated with autism and created hundreds of decision-making scenarios around them. Researchers tested six major large language models, including GPT-4, Claude, Llama, Gemini, and DeepSeek, using thousands of scenarios where users requested advice — "Should I do A or B?" — about social situations, including events, confrontations, new experiences, and romantic relationships.

After generating 345,000 responses, they measured how advice shifted when users explicitly described themselves with stereotypical traits and when they simply disclosed that they were autistic. Researchers found that disclosing autism often shifted the models’ recommendations toward stereotypical assumptions about autistic people being introverted, obsessive, socially awkward, or uninterested in romance.

For example, one model recommended declining a social invitation nearly 75 percent of the time when autism was disclosed, compared with about 15 percent of the time when it was not. In dating scenarios, another model recommended avoiding romance or staying single nearly 70 percent of the time after autism disclosure, compared with roughly 50 percent when autism was not mentioned.

The results showed that 11 of the 12 stereotype cues significantly shifted model decisions across at least four of the six AI systems tested.

But the researchers did not stop with statistics.

The team interviewed 11 autistic AI users and showed them examples of how the models responded with and without autism disclosure. Some were shocked at how heavily the models leaned on stereotypes when giving advice.

One exclaimed: “Are we writing an advice column for Spock here?” — invoking the iconic TV show Star Trek and its half-human, half-Vulcan character, who prioritized logic and reason over emotion. Others described the advice as restrictive, patronizing, or infantilizing, sometimes in strong language.

But some participants said the more cautious, disclosure-based advice felt validating and supportive.

“One user’s bias could be another user’s personalization,” Rho said.

The same participant could react positively in one situation and negatively in another. That tension led the researchers to what they call a “safety-opportunity paradox.” Advice that feels protective to one user may feel limiting to another.

For Wohn, one of the most troubling discoveries was how difficult it can be for users to see these patterns in real time.

“AI is very good at seeming reliable,” he said. “Its responses are very clean and professional, and they sound right. But when you think about it being deployed systematically, when you think about the kind of systematic biases that are actually shaping its responses, that’s when it starts to get a lot more concerning.”

He compared the problem to AI-generated images.

“They look really clean and polished, and then when you look at the details, things fall apart,” Wohn said. “The surface gloss is beautiful, but looking deeper is getting harder and harder, because models are getting better at masking.”

Team members hope the research will encourage developers to build more transparent AI systems that give users greater control over how personal information shapes responses.

As one participant told the researchers: “I want to have control over how my identity is used.”

Original study : doi.org/10.1145/3772318.379131


Contact Information

Margaret Ashburn
Virginia Tech
mkashburn@vt.edu
Chelsea Seeber
Virginia Tech College of Engineering
chelseab29@vt.edu
