JAMA research: AI model explanations don't meaningfully mitigate bias
In a study published this month in JAMA, computer scientists and clinicians from the University of Michigan explored the use of artificial intelligence to help diagnose hospitalized patients.
They were particularly curious about how diagnostic accuracy is affected if doctors understand how the AI models they use work – and how they might be biased or limited.
In theory, image-based AI model explanations could help providers discover algorithms that may be systematically biased and therefore inaccurate. But the researchers found that such explanatory guides "did not help clinicians identify systematically biased AI models."
To assess how systematically biased AI affects diagnostic accuracy – and whether image-based model explanations can reduce errors – the researchers designed a randomized clinical vignette trial involving hospitalist physicians, nurse practitioners and physician assistants across 13 US states.
These clinicians were presented with nine clinical vignettes of patients hospitalized with acute respiratory failure, including their presenting symptoms, physical examination findings, laboratory results and chest X-rays.
They were then asked to “determine the likelihood of pneumonia, heart failure, or chronic obstructive pulmonary disease as the underlying cause(s) of each patient's acute respiratory failure,” researchers said.
Clinicians were first shown two vignettes without input from the AI model. They were then randomized to see six vignettes with AI model input, with or without AI model explanations. Of these six vignettes, three contained standard model predictions and the other three contained systematically biased model predictions.
Among the study's findings: “Diagnostic accuracy increased significantly by 4.4% when physicians reviewed a patient clinical vignette with standard AI model predictions and model explanations compared to baseline accuracy.”
However, accuracy dropped by more than 11% when physicians were presented with systematically biased AI model predictions, and model explanations did not protect against the negative effects of those inaccurate predictions.
As the researchers found, standard AI models can improve diagnostic accuracy, but systematic bias reduces it, "and commonly used image-based AI model explanations have not mitigated this detrimental effect."
Mike Miliard is editor-in-chief of Healthcare IT News.
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.