
Who’s to blame when AI makes a medical error?

03.24.25 | University of Texas at Austin


Assistive artificial intelligence technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients. However, the way assistive AI is currently being implemented could actually worsen challenges related to error prevention and physician burnout, according to a new brief published Friday in JAMA Health Forum.

The brief, written by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and The University of Texas at Austin McCombs School of Business, explains that physicians are increasingly expected to rely on AI to minimize medical errors. However, despite rapid adoption of these technologies by health care organizations, the laws and regulations needed to support physicians as they make AI-guided decisions are not yet in place.

The researchers predict that medical liability will depend on whom society considers at fault when the technology fails or makes a mistake. That ambiguity subjects physicians to an unrealistic expectation: knowing precisely when to trust AI and when to override it. The authors warn that such an expectation could increase the risk of burnout, and even of errors, among physicians.

“AI was meant to ease the burden, but instead, it’s shifting liability onto physicians — forcing them to flawlessly interpret technology even its creators can’t fully explain,” said Shefali Patil, visiting associate professor at the Carey Business School and associate professor at the University of Texas McCombs School of Business. “This unrealistic expectation creates hesitation and poses a direct threat to patient care.”

The new brief suggests strategies for health care organizations to support physicians by shifting the focus from individual performance to organizational support and learning, which may alleviate pressure on physicians and foster a more collaborative approach to AI integration.

“Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft — while they’re flying it,” said Christopher Myers, associate professor and faculty director of the Center for Innovative Leadership at the Carey Business School. “To ensure AI empowers rather than exhausts physicians, health care organizations must develop support systems that help physicians calibrate when and how to use AI so they don’t need to second-guess the tools they’re using to make key decisions.”

The full viewpoint is available on the JAMA Health Forum media site.

Journal: JAMA Health Forum

DOI: 10.1001/jamahealthforum.2025.0106

Article type: Commentary/editorial

Article title: Calibrating AI Reliance—A Physician’s Superhuman Dilemma

Published: 21-Mar-2025

Contact Information

Judie Kinonen
University of Texas at Austin
judie.kinonen@mccombs.utexas.edu


How to Cite This Article

APA:
University of Texas at Austin. (2025, March 24). Who’s to blame when AI makes a medical error? Brightsurf News. https://www.brightsurf.com/news/1GR4ZPW8/whos-to-blame-when-ai-makes-a-medical-error.html
MLA:
"Who’s to blame when AI makes a medical error?" Brightsurf News, 24 Mar. 2025, https://www.brightsurf.com/news/1GR4ZPW8/whos-to-blame-when-ai-makes-a-medical-error.html.