Medical AI systems failing to disclose inaccurate race, ethnicity information

06.09.25 | University of Minnesota

Inaccurate race and ethnicity data in electronic health records (EHRs) can negatively affect patient care as artificial intelligence (AI) is increasingly integrated into healthcare. Because hospitals and providers collect such data inconsistently and struggle to classify individual patients accurately, AI systems trained on these datasets can inherit and perpetuate racial bias.

In a new publication in PLOS Digital Health, experts in bioethics and law call for immediate standardization of methods for collecting race and ethnicity data, and for developers to warrant race and ethnicity data quality in medical AI systems. The research synthesizes concerns about why patient race data in EHRs may not be accurate, identifies best practices healthcare systems and medical AI researchers can use to improve data accuracy, and provides a new template for medical AI developers to transparently warrant the quality of their race and ethnicity data.

Lead author Alexandra Tsalidis, MBE, notes that “If AI developers heed our recommendation to disclose how their race and ethnicity data were collected, they will not only advance transparency in medical AI but also help patients and regulators critically assess the safety of the resulting medical devices. Just as nutrition labels inform consumers about what they’re putting into their bodies, these disclaimers can reveal the quality and origins of the data used to train AI-based health care tools.”
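To make the nutrition-label analogy concrete, the sketch below shows what a minimal machine-readable disclosure for race and ethnicity training data might look like. The schema and field names are illustrative assumptions for this article, not the template published in the paper.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class RaceEthnicityDataDisclosure:
    """A hypothetical 'nutrition label' for race and ethnicity training data.

    Every field name here is illustrative; it is not the authors' published template.
    """
    collection_method: str        # e.g., "patient self-report" vs. "clinician-assigned" or "imputed"
    source_systems: list[str]     # EHR systems or registries the data were drawn from
    categories_used: list[str]    # classification scheme, e.g., OMB minimum categories
    missingness_rate: float       # fraction of records with no race/ethnicity recorded
    imputation_used: bool         # whether missing values were algorithmically inferred
    known_limitations: str        # free-text caveats for patients and regulators

# Example disclosure a developer might ship alongside a model (values are invented)
label = RaceEthnicityDataDisclosure(
    collection_method="patient self-report at registration",
    source_systems=["hospital EHR, 2015-2023"],
    categories_used=["OMB 1997 minimum race and ethnicity categories"],
    missingness_rate=0.12,
    imputation_used=False,
    known_limitations="Intake forms offered limited multiracial options.",
)

print(json.dumps(asdict(label), indent=2))
```

A structured label like this would let patients and regulators see at a glance how the underlying race and ethnicity data were gathered and where they may fall short.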

“Race bias in AI models is a huge concern as the technology is increasingly integrated into healthcare,” senior author Francis Shen, JD, PhD, says. “This article provides a concrete method that can be implemented to help address these concerns.”

While more work needs to be done, the article offers a starting point, suggests co-author Lakshmi Bharadwaj, MBE. “An open dialogue regarding best practices is a vital step, and the approaches we suggest could generate significant improvements.”

The research was supported by the NIH Bridge to Artificial Intelligence (Bridge2AI) program, and by an NIH BRAIN Neuroethics grant (R01MH134144).

- 30 -

About the Consortium on Law and Values in Health, Environment & the Life Sciences
Founded in 2000, the Consortium on Law and Values in Health, Environment & the Life Sciences links 22 member centers working across the University of Minnesota on the societal implications of biomedicine and the life sciences. The Consortium publishes groundbreaking work on issues including genetic and genomic research, oversight of nanobiology, cutting-edge neuroscience, and ethical issues raised by advances in bioengineering.

Article Information

Journal: PLOS Digital Health
DOI: 10.1371/journal.pdig.0000807
Method of Research: Data/statistical analysis
Subject of Research: People
Article Title: Standardization and accuracy of race and ethnicity data: Equity implications for medical AI
Article Publication Date: 29-May-2025

Contact Information

Rachel Cain
University of Minnesota
rcain@umn.edu

Source

University of Minnesota

How to Cite This Article

APA:
University of Minnesota. (2025, June 9). Medical AI systems failing to disclose inaccurate race, ethnicity information. Brightsurf News. https://www.brightsurf.com/news/147NK2O1/medical-ai-systems-failing-to-disclose-inaccurate-race-ethnicity-information.html
MLA:
"Medical AI systems failing to disclose inaccurate race, ethnicity information." Brightsurf News, Jun. 9 2025, https://www.brightsurf.com/news/147NK2O1/medical-ai-systems-failing-to-disclose-inaccurate-race-ethnicity-information.html.