The Gordon and Betty Moore Foundation has awarded $1.25 million to a joint project between Vanderbilt University Medical Center and Duke University School of Medicine, in collaboration with the Coalition for Health AI and the University of Iowa, to develop a maturity model framework and improve oversight of AI technology in healthcare systems.
WHY IT MATTERS
While healthcare systems are actively developing, deploying, and using algorithms, VUMC and Duke researchers say gaps in oversight, resources, and organizational infrastructure at healthcare organizations endanger the safety, fairness, and quality of AI used in clinical decision-making.
They aim to identify the essential capabilities that healthcare systems must develop to ensure they are well prepared for the reliable use of AI models.
“This work will deliver new tools and capabilities that our healthcare system needs to ensure we select, deploy and monitor health AI to make healthcare safer, more effective, more ethical and more equitable for all,” said Dr. Peter Embí, one of the project leaders at VUMC, in Thursday’s grant announcement.
The project’s goal of building an empirically supported maturity model for AI in healthcare could ultimately help healthcare systems comprehensively document which algorithms are deployed, what value they provide, who oversees them and who is responsible for their use.
Over the next year, the VUMC and Duke teams will engage stakeholders from CHAI and a range of healthcare systems to outline the key components that healthcare systems must have in place for the reliable implementation of AI.
“Creating a framework for a maturity model for AI in healthcare will enable healthcare systems to identify their strengths and weaknesses in acquiring and deploying AI solutions, ultimately driving the transformation of healthcare for the better,” said Nicoleta Economou, director of algorithm-based clinical decision support oversight at Duke AI Health.
THE BIG TREND
CHAI, which released its Blueprint for AI in Healthcare earlier this year, is taking a patient-centered approach to removing barriers to trust in AI and machine learning. It says it aims to align health AI standards and reporting to help patients and their doctors better evaluate the algorithms that contribute to their care.
“Transparency and trust in AI tools that will influence medical decisions are absolutely paramount for patients and physicians,” said Dr. Brian Anderson, digital health lead at MITRE and co-founder of CHAI, after the blueprint was published.
In August, Anderson discussed the importance of AI testability, transparency, and usability, as outlined in his guide to safe and reliable AI implementation.
He told Healthcare IT News that the White House, government agencies and other CHAI partners are bringing together the private and public sectors, communities and patients to work on the metrics, measurements and tools needed to manage the product lifecycles of AI models over time.
“There are a lot of unanswered questions,” he said of the “breathtaking” speed of AI innovation.
“For example, on the regulatory front, I’m concerned that collectively, not just our government but society in general, we just haven’t had the opportunity to come to an agreed-upon set of guardrails that we need to put in place around some of these models, because health has such a major impact.”
“We are all patients,” he said.
ON THE RECORD
“To realize the full potential of AI technologies, healthcare systems must develop a more mature process for implementing these tools,” said Michael Pencina, chief data scientist at Duke Health and vice dean for data science at the Duke University School of Medicine, in the announcement.
“Improving oversight of AI technology in healthcare systems is critical to ensuring the safety and effectiveness of patient care.”
Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.