
How AI is integrated into clinical workflow lowers medical liability perception

03.10.26 | Penn State


HERSHEY, Pa. — Artificial intelligence (AI) is changing the field and practice of medicine, including legal liability and the perception of who is at fault when a patient experiences harm.

“AI holds promise to improve the quality and safety of health care and to reduce errors and patient harm, but the risk of legal liability is a potential barrier for investment and development of this technology as well as the quality of care,” said Michael Bruno, professor of radiology and of medicine at Penn State College of Medicine.

Now, Bruno, working alongside a team of researchers from Brown University and Seton Hall University School of Law, found that perceptions of physician liability are influenced by the way in which AI is integrated into a clinician’s workflow. The study was published today (March 10) in the journal Nature Health.

The researchers presented mock jurors with a hypothetical malpractice case in which a patient suffered irreversible brain damage because a radiologist failed to detect a brain bleed on a computed tomography (CT) scan, even though AI correctly flagged the scan as abnormal. Mock jurors were almost 50% more likely to side with the plaintiff and against the radiologist when the radiologist reviewed the CT scan only once, after AI flagged it, than when the radiologist read the scan twice: once before receiving the AI feedback and once after.

Nearly a year ago, Bruno convened a two-day Research Summit on “Human Factors and Artificial Intelligence in Healthcare,” on the Penn State College of Medicine campus, bringing together an international group of multidisciplinary experts from academia and industry to establish future research priorities for the field of Human-AI collaboration.

“If you're a stakeholder trying to figure out whether to purchase an AI product at a hospital, whether to direct your doctors to follow a certain workflow, or whether to settle a case because an error has already occurred, this kind of information is vital because you can weigh the cost versus the benefits in a far more informed way,” said Brian Sheppard, professor of law at Seton Hall University School of Law and a co-author on the paper.

The researchers explained that they chose to look at a radiology-based case because the integration of AI into radiology practice is further along than in other areas of medicine, which means that the physician-AI interaction is a plausible scenario. Since most medical malpractice cases are settled out of court and out of the public record or, if they do go to court, take years to litigate, using a hypothetical case allows the researchers to gather information that would otherwise be unavailable.

For this study, the team recruited 282 participants who were randomized to read one of two scenarios. In the first scenario, AI flagged the case as abnormal, and the radiologist then reviewed the images once, concluding that there was no evidence of bleeding in the brain. In the second, the radiologist reviewed and interpreted the CT twice, once before receiving feedback from the AI system and then a second time after AI flagged the case as abnormal. In both instances, the radiologist concluded that there was no evidence of a brain bleed. After reading the case, participants were asked if the radiologist met their duty of care to the patient.

Nearly 75% of the mock jurors found that the radiologist did not meet their duty of care when they reviewed the CT once. However, that dropped to 53% when the radiologist reviewed the CT twice. The findings suggest that changes to radiologist workflow, specifically when and how many times radiologists review and interpret imaging tests when AI is involved, could reduce legal risk, the researchers explained. However, these changes aren’t without costs.
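The “almost 50% more likely” figure cited earlier follows from these two percentages. A quick sketch of the arithmetic, assuming the rounded values reported here (75% and 53%):

```python
# Relative increase in plaintiff-favorable verdicts, using the rounded
# percentages reported in the release (assumed values, not exact study data).
single_review = 0.75  # jurors finding duty of care not met, single review after AI flag
double_review = 0.53  # same finding when the radiologist reviewed the scan twice

relative_increase = single_review / double_review - 1
print(f"{relative_increase:.0%}")  # prints "42%", i.e. "almost 50% more likely"
```

With the unrounded study percentages the ratio would shift slightly, which is consistent with the release's hedge of “almost 50%.”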

“There are all these biases that are incentivizing radiologists not to disagree with AI because the cost of disagreeing with it is too high. If you disagree with AI and you're wrong, this will be used against you,” said co-author Grayson Baird, associate professor of radiology at Brown University and director of the Brown Radiology, Psychology, and Law Lab and the Brown Radiology Human Factors Lab. “The cost is then passed on to the patient who now has to deal with the anxiety and discomfort from follow-up care, imaging or tests. We all pay for it, too, because the cost of healthcare increases.”

While the study didn’t explore the underlying reasons behind the relationship between AI and perception of legal liability, the researchers explained that the findings show that how people determine fault when AI systems are used depends on context.

This study builds on prior work by the research team, which used the same hypothetical case and found that mock jurors were less likely to find a radiologist liable when the radiologist agreed with an AI interpretation than when they disagreed. The perception of legal liability was also mitigated when AI error rates were presented to mock jurors versus when they were unknown to the jurors. In another study, other researchers found that AI can impact physician decision-making, prompting physicians to change their minds on treatment decisions.

“How people perceive AI, and how their perception impacts human liability, is evolving quickly along with the technology. It’s something that we need to pay close attention to,” said corresponding author Michael Bernstein, associate professor of radiology at Brown University and associate director of the Brown Radiology Human Factors Lab.

DOI: 10.1038/s44360-026-00085-2
Method of Research: Experimental study
Subject of Research: People
Article Title: The radiologist–AI workflow and the risk of medical malpractice claims
Article Publication Date: 10-Mar-2026


Contact Information

Christine Yu
Penn State
cmy5406@psu.edu

