Yale research shows how AI biases are exacerbating health care inequalities

A new research report from the Yale School of Medicine provides a close-up look at how biased artificial intelligence can influence clinical outcomes. The research focuses specifically on the different stages of AI model development and shows how data integrity issues can impact healthcare equity and quality of care.

WHY IT MATTERS
Published earlier this month in PLOS Digital Health, the study provides both real and hypothetical illustrations of how AI bias is negatively impacting healthcare – not just at the point of care, but at every stage of medical AI development: training data, model development, publication, implementation and more.

“Bias in, bias out,” the study’s senior author John Onofrey, assistant professor of radiology and biomedical imaging and urology at the Yale School of Medicine, said in a news release.

“Having worked in the machine learning/AI field for many years, the idea that biases exist in algorithms is not surprising,” he said. “However, listing all the possible ways biases can enter the AI learning process is mind-boggling. This makes mitigating biases a daunting task.”

As the research shows, biases can show up virtually anywhere in the algorithm development pipeline.

It can occur in “data features and labels, model development and evaluation, implementation and publication,” researchers say. “Insufficient sample sizes for certain patient groups can result in suboptimal performance, algorithm underestimation and clinically meaningless predictions. Missing patient findings can also lead to biased model behavior – including capturable but nonrandomly missing data, such as diagnosis codes, and data that is not usually available or not easily captured, such as social determinants of health.”
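To make that missing-data point concrete, consider a small illustrative sketch – not drawn from the Yale study, and using entirely hypothetical numbers: if one patient group has fewer healthcare encounters, diagnosis codes for the same underlying condition are recorded less often, and a label built from those codes understates that group’s true risk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical scenario: two groups with the SAME true disease prevalence,
# but group B's diagnoses are coded less often (e.g., less access to care).
group = rng.choice(["A", "B"], size=n)
true_disease = rng.random(n) < 0.10             # 10% prevalence in both groups
p_coded = np.where(group == "A", 0.9, 0.5)      # code captured 90% vs. 50% of the time
coded_label = true_disease & (rng.random(n) < p_coded)

for g in ("A", "B"):
    mask = group == g
    print(f"Group {g}: true prevalence {true_disease[mask].mean():.3f}, "
          f"coded prevalence {coded_label[mask].mean():.3f}")

# A model trained on coded_label would "learn" that group B is lower risk,
# a difference that reflects missing data, not biology.
```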

Meanwhile, “expert-annotated labels used to train supervised learning models may reflect implicit cognitive biases or substandard care practices. Overreliance on performance metrics during model development can obscure biases and reduce the clinical usefulness of a model. When applied to data outside the training cohort, model performance may degrade from prior validation, and may do so differently across subgroups.”
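That last point – an aggregate metric masking much worse performance for some subgroups – is straightforward to check with a post-hoc audit. The sketch below is not from the paper; the column names and metric are assumptions, but the idea is simply to report the same metric overall and within each subgroup rather than relying on one pooled number.

```python
from sklearn.metrics import roc_auc_score

def subgroup_auc(df, label_col, score_col, group_col):
    """Return AUC overall and per subgroup of a pandas DataFrame.

    A pooled AUC alone can hide much weaker discrimination
    for smaller or underrepresented patient groups."""
    results = {"overall": roc_auc_score(df[label_col], df[score_col])}
    for name, sub in df.groupby(group_col):
        results[name] = roc_auc_score(sub[label_col], sub[score_col])
    return results

# Hypothetical usage with model scores and a demographic column:
# audit = subgroup_auc(test_df, "outcome", "model_score", "race_ethnicity")
# print(audit)  # e.g. {'overall': 0.82, 'Group A': 0.85, 'Group B': 0.71}
```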

And of course, the way clinical end users interact with AI models can itself introduce bias.

Ultimately, “where AI models are developed and published, and by whom, will influence the trajectories and priorities of future medical AI development,” say the Yale researchers.

And they note that all efforts to reduce that bias – “collection of large and diverse data sets, statistical debiasing methods, rigorous model evaluation, emphasis on model interpretability, and standardized bias reporting and transparency requirements” – must be carried out carefully, with a keen eye on how these guardrails will work in practice to prevent negative impacts on patient care.

“Prior to implementation in the clinical setting, rigorous validation through clinical trials is critical to demonstrate unbiased application,” they said. “Addressing biases in model development stages is critical to ensuring that all patients benefit equally from the future of medical AI.”

The report, “Bias in medical AI: Implications for clinical decision-making,” offers some suggestions to mitigate that bias, toward the goal of improving healthcare equity.

For example, previous research has shown that including race as a factor in estimating kidney function can lead to longer waits for Black patients to be placed on transplant lists. The Yale researchers make several recommendations to help future AI algorithms use more precise measures, such as zip code and other socioeconomic factors.
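For context on that kidney example – background, not a finding of the new study – the 2009 CKD-EPI creatinine equation applied a multiplier of roughly 1.16 to estimated GFR for patients recorded as Black; the 2021 refit removed the race term. A simplified transcription of the 2009 formula (coefficients as commonly published; verify against the primary literature) shows how a single coefficient can move a patient across an eligibility cutoff, such as the eGFR of roughly 20 often used for transplant waitlisting.

```python
def ckd_epi_2009_egfr(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """2009 CKD-EPI creatinine eGFR (mL/min/1.73 m²), simplified transcription."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159   # race multiplier removed in the 2021 refit
    return egfr

# Same creatinine, same age and sex – only the race flag differs:
print(ckd_epi_2009_egfr(3.2, 60, female=False, black=False))  # ~20: near the waitlist threshold
print(ckd_epi_2009_egfr(3.2, 60, female=False, black=True))   # ~23: appears healthier, waits longer
```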

ON THE RECORD
“Better capturing and using social determinants of health in medical AI models for predicting clinical risk will be paramount,” said James L. Cross, a first-year medical student at the Yale School of Medicine and the study’s first author, in a statement.

“Bias is a human problem,” added study co-author Dr. Michael Choma, Yale associate professor of radiology and biomedical imaging. “When we talk about ‘bias in AI,’ we must remember that computers learn from us.”

Mike Miliard is editor-in-chief of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a HIMSS publication.