Epic is leading new efforts to democratize health AI validation

Epic last week announced the availability of new software that could help hospitals and healthcare systems assess and validate artificial intelligence models.

The tool is aimed at healthcare organizations that may otherwise not have the resources to properly validate their AI and machine learning models. The software, which is open source and available for free on GitHub, is designed to help providers make decisions based on their own local data and workflows.

Epic is working with the Health AI Partnership and data scientists from Duke University, the University of Wisconsin and other organizations to test the ‘seismometer’ and develop a shared, standardized language.

The suite of tools can help validate that AI models improve patient care, increase health equity and avoid bias, said Corey Miller, Epic’s vice president of research and development.

We recently spoke with Miller — along with Mark Sendak, chief of public health and data science at the Duke Institute for Health Innovation and leader of the Health AI Partnership, and Brian Patterson, UW Health director of medical informatics for predictive analytics and AI — to learn more about the software and how healthcare organizations can use it.

The three described how the open source tool can help with vendor workflows and clinical use cases, applying analysis plans, and contributing improvements – and how the credibility of open source lends itself to scaling the use of AI in healthcare.

A ‘funnel’ that uses local data

A big potential benefit of the validation tool, Miller said, is the ability to use it to dig into data and figure out why a “protected class doesn’t get as good results as other people” and to learn which interventions can improve patient outcomes.

The seismometer — Epic’s first open source tool — is designed so that any healthcare organization can use it to evaluate any AI model, including homegrown models, against local population data, he said. The suite uses standardized evaluation criteria for each data source – any electronic health record or risk management system, Miller said.

“The data schema and funnel simply pull data from each source,” he explained. “But by standardizing the way you get the data out of the system, it gets ingested and put into this notebook, which is essentially the data that you can run code on.”

The resulting dashboards and visualizations are “gold standard tools” already used to evaluate AI models in healthcare.
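The project’s actual data schemas and notebook templates live in the GitHub repository. As a rough, hypothetical sketch of the kind of notebook code involved – not seismometer’s actual API – the example below assumes a standardized extract with "score" and "outcome" columns and computes two common discrimination metrics on local data:

```python
# Hypothetical sketch of the notebook workflow described above: a standardized
# extract pulled from the local EHR or risk system is ingested and evaluated.
# The file name and column names are assumptions, not seismometer's schema.
import pandas as pd
from sklearn.metrics import roc_auc_score, average_precision_score

df = pd.read_csv("model_extract.csv")  # assumed columns: score, outcome

# Standard discrimination metrics computed on the local population.
auroc = roc_auc_score(df["outcome"], df["score"])
auprc = average_precision_score(df["outcome"], df["score"])
print(f"AUROC={auroc:.3f}  AUPRC={auprc:.3f}")
```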

Epic will not receive any user data, as the intent is to perform the validation locally, but the EHR vendor’s developers and quality assurance staff will review any code proposed for inclusion via GitHub.

Open source to build reliable AI

Although the tool relies on technology Epic has developed over many years, Miller said it took about two months to open-source and build additional components, data schemas and notebook templates.

He said that during that time, Epic worked with data scientists and physicians from various healthcare organizations to test the suite based on their own local predictions.

The goal is to “help with a real problem,” he said.

One tool in the seismometer suite, called the Fairness Audit, is based on an audit toolkit developed by the University of Chicago and Carnegie Mellon to assess a model’s fairness across different protected classes and demographic groups, Miller said.
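As an illustrative sketch of what such a fairness check computes – comparing a metric like the true-positive rate across demographic groups against a reference – consider the following, where the column names and the 0.5 threshold are assumptions, not the Fairness Audit’s actual API:

```python
# Illustrative fairness check in the spirit of a fairness audit (not the
# Fairness Audit's actual API): compare per-group true-positive rates.
# Columns ('score', 'outcome', 'race') and the 0.5 threshold are assumed.
import pandas as pd

df = pd.read_csv("model_extract.csv")
df["pred"] = (df["score"] >= 0.5).astype(int)

def tpr(group: pd.DataFrame) -> float:
    positives = group[group["outcome"] == 1]
    return float("nan") if positives.empty else (positives["pred"] == 1).mean()

rates = df.groupby("race").apply(tpr).rename("tpr")
disparity = rates / rates.max()  # ratios near 1.0 suggest parity across groups
print(pd.DataFrame({"tpr": rates, "tpr_disparity": disparity}))
```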

“Most healthcare organizations today do not have the capabilities or staff for testing and monitoring local models,” Sendak added.

In December, Sendak and Jenny Ma, a senior advisor at the Health and Human Services Office for Civil Rights, said at the ONC 2023 Annual Meeting — in a session focused on addressing racial bias in AI — that it became apparent during the COVID-19 pandemic that healthcare resources were unfairly allocated.

“It was a very surprising experience to see firsthand how ill-equipped not only Duke, but many health care systems across the country, were to cater to low-income, marginalized populations,” Sendak had said.

While HAIP and many other healthcare institutions have validated AI, Sendak said this new AI validation tool provides a “standard set of analytics that will now be much more widely accessible” to many other organizations.

“It’s an opportunity to really spread best practices by giving people the tools,” he said.

The University of Wisconsin will collaborate with HAIP – a multi-stakeholder group of ten healthcare organizations and four ecosystem partners that have joined for peer learning, collaboration and the creation of guidelines for using AI in healthcare – and with the community of users to test and improve the open source tools and make those “apples to apples” comparisons.

“Even though we have a team of data scientists and we’re in one of these well-resourced places, everyone benefits from the tools that make it easier,” said Patterson.

Having the tools for standard processes “would make our lives easier,” he added, and the engaged community of users validating Epic’s open source tool together “is one of the things that’s going to build trust among end users.”

Comparisons between organizations

Patterson said the University of Wisconsin team hasn’t picked specific use cases to test with the seismometer, but the plan is to start with the simpler AI models they use.

“None of the models are super simple, but we have a set of models that we use from Epic and some models that our research teams have developed,” he said.

Those that “run on fewer inputs, and specifically models that give a ‘yes, no’ output, whether this condition exists or not, are good models that allow us to generate some early metrics.”
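For such binary models, those early metrics can be as simple as confusion-matrix rates at a chosen decision threshold. A minimal sketch, again with assumed column names and threshold:

```python
# Minimal sketch of "early metrics" for a yes/no model: confusion-matrix
# rates at a single decision threshold. Data columns and threshold assumed.
import pandas as pd
from sklearn.metrics import confusion_matrix

df = pd.read_csv("model_extract.csv")
y_true = df["outcome"]
y_pred = (df["score"] >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}  "
      f"specificity={tn / (tn + fp):.3f}  "
      f"PPV={tp / (tp + fp):.3f}")
```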

Sendak said HAIP is considering a shortlist of models for its initial evaluation study, which aims to improve the usability of the tools in community and rural settings that are part of the technical assistance program.

“All the models we look at require some degree of localized retraining of the model parameters,” he explained.

“We’re going to look at: how does the out-of-the-box model perform at Duke and the University of Wisconsin? Then, after we do the localization, where we train on local data to update the model, we can say, ‘Okay, how does this localized version compare across sites?’”
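The localization step will vary by model, but the shape of the comparison Sendak describes can be sketched generically. The example below uses a simple Platt-style logistic recalibration as a stand-in for training on local data – not the actual retraining process – and the site file names are hypothetical:

```python
# Hedged illustration of the cross-site comparison: score the out-of-the-box
# model at each site, then a locally recalibrated version. Platt-style
# recalibration improves calibration (Brier score) but, being a monotone
# transform, leaves rank-based AUROC unchanged.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

for site_file in ["duke_extract.csv", "uw_extract.csv"]:  # hypothetical extracts
    df = pd.read_csv(site_file)
    X, y = df[["score"]], df["outcome"]

    # "Localized" version: refit a calibration layer on local data.
    # (In practice this would be fit on a held-out training split.)
    recalibrated = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

    print(f"{site_file}: AUROC={roc_auc_score(y, df['score']):.3f}  "
          f"Brier before={brier_score_loss(y, df['score']):.3f}  "
          f"after={brier_score_loss(y, recalibrated):.3f}")
```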

“I think ultimately these tools will be most effective on models that are fairly complex,” Patterson added. “And the ability to do that with fewer data science resources at your disposal democratizes that process and hopefully expands that community quite a bit.”

AI validation for compliance

Sendak said the tools can help provider organizations ensure fairness and figure out where they need to improve, noting that organizations have 300 days to comply with the new nondiscrimination rules.

“They need to do risk mitigation to avoid discrimination,” he said. “They will be held liable for discrimination resulting from the use of algorithms.”

The Section 1557 nondiscrimination rule, finalized last month by OCR, applies to a range of healthcare operations, from screening and risk prediction to diagnosis, treatment planning and resource allocation. The rule extends to telehealth and some AI tools, broadening the activities that could expose providers to liability for healthcare discrimination.

HHS said it received more than 85,000 public comments on the rule on nondiscrimination in health programs and activities.

A new, free 12-month technical assistance program through HAIP will help five locations implement AI models, Sendak noted.

“We know that the size of the problem of 1,600 federally qualified health centers and 6,000 hospitals in the United States is an enormous scale at which we need to rapidly disseminate expertise,” he explained.

The HAIP Practice Network will support organizations such as FQHCs and others that lack data science capabilities. Applications must be submitted by June 30.

The selected sites will adopt best practices, contribute to their further development and help assess the impact of AI on healthcare.

“That’s where we see a huge need for tools and resources to support the local validation of AI models,” said Sendak.

Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.