Faculty mentor/PI email address

Brandon.Velazquez@hudsonregionalhealth.com

Keywords

Automation bias, artificial intelligence, radiology, diagnostic accuracy, meta-analysis, patient safety

Date of Presentation

5-6-2026 12:00 AM

Poster Abstract

Background: Over 1,000 FDA-cleared AI algorithms are now available in radiology. A key assumption is that radiologists serve as the final safety net, capable of rejecting incorrect AI outputs. However, automation bias — the tendency to follow automated recommendations uncritically — may compromise this oversight.

Objective: To systematically review evidence on automation bias in radiology and quantify the degree to which radiologists follow incorrect AI recommendations.

Methods: Systematic review of PubMed (April 2016–April 2026) conducted following PRISMA guidelines. Five studies encompassing mammography, chest radiography, and MRI met inclusion criteria. Meta-analysis was performed using a random-effects model, with odds ratios calculated from raw study data.

Results: Radiologists followed incorrect AI recommendations at high rates across all studies. Pooled OR = 4.89 (95% CI: 2.14–11.18, p < 0.001). Inexperienced radiologists were most susceptible, with accuracy dropping from 79.7% to 19.8% when AI was incorrect (OR = 15.57). Interventions including explainability inputs and attitudinal priming failed to significantly reduce automation bias.
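To illustrate the kind of pooling the Methods describe, the sketch below computes a random-effects pooled odds ratio with the DerSimonian–Laird estimator. The per-study 2x2 counts are invented placeholders, not the review's actual data, and the abstract does not specify which random-effects estimator was used; this is a minimal sketch under those assumptions.

```python
import math

# Hypothetical 2x2 counts per study: (followed incorrect AI, did not,
# followed when AI correct, did not). Illustrative numbers only.
studies = [
    (30, 10, 15, 25),
    (22, 18, 9, 31),
    (40, 20, 12, 28),
    (18, 12, 10, 20),
    (25, 15, 11, 29),
]

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance (0.5 continuity correction)."""
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    return math.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

ys, vs = zip(*(log_or_and_var(*s) for s in studies))
w = [1 / v for v in vs]  # inverse-variance (fixed-effect) weights

# DerSimonian-Laird estimate of between-study variance tau^2
ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(studies) - 1)) / C)

# Random-effects pooled log OR and 95% confidence interval
w_re = [1 / (v + tau2) for v in vs]
mu = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"Pooled OR = {math.exp(mu):.2f} "
      f"(95% CI {math.exp(mu - 1.96 * se):.2f}-{math.exp(mu + 1.96 * se):.2f})")
```

With real study counts in place of the placeholders, the same calculation yields a pooled OR and CI of the form reported above.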

Conclusions: Automation bias is consistent across radiology subspecialties and experience levels. These findings underscore the need for training programs that build independent diagnostic skills before introducing AI-assisted interpretation.


Title

A Systematic Review of Artificial Intelligence and Automation Bias in Radiology: Implications for Diagnostic Accuracy


 
