Healthcare sector is aligning on AI model cards and other innovations
From building consensus on the definition of responsible AI to honing semi-autonomous applications of AI in healthcare, industry and government appear to be moving toward shared values, even in these polarized times, as we head into 2025, says Coalition for Health AI CEO Brian Anderson.
“We need our policymakers and regulators to understand these kinds of frameworks that are being developed in the private sector about what responsible AI looks like in healthcare, and then develop the regulatory frameworks around that,” Anderson tells Healthcare IT News.
Alignment so far
Much thought has been given by the technology industry and federal healthcare regulators to AI model cards, or AI “nutrition labels” – a digestible format for communicating key aspects of AI model development to users.
As CHAI released the open-source version of its draft AI model card on Thursday, we spoke with Anderson about the coalition’s recent experiences and his insights into what the near future might bring as it develops a public-private framework for safely deploying AI in healthcare.
“It’s great to see the alignment between where the private sector innovation community is going and where the public sector regulatory community is going,” he said.
While CHAI is seeking feedback this month on its open-source draft model card – which “stems from” the Office of the National Coordinator for Health Information Technology’s Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule – with a plan to roll out an update sometime in the next six months, Anderson said he’s hopeful that several regulators are moving in the same direction and will continue to do so.
In particular, he mentioned some degree of coordination on AI requirements with medical device regulators.
Earlier this week, the U.S. Food and Drug Administration included a sample voluntary AI model card in its draft recommendations covering the total product lifecycle (design, development, maintenance and documentation) of AI-enabled devices.
“One of the exciting things about looking at the FDA sample model card, and then certainly looking at ONC’s HTI-1 rule, is that CHAI’s model card is very much aligned with both of those,” Anderson said.
According to the FDA, a model card for AI-enabled medical devices can address challenges in communicating important information to healthcare users – patients, clinicians, regulators and researchers – and the public.
“Research has shown that using a model card can increase user confidence and understanding,” the FDA said in its draft guidance. “They provide a means to consistently summarize key aspects of AI devices and can be used to succinctly describe their features, performance and limitations.”
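To make the “nutrition label” idea concrete, here is a minimal, hypothetical sketch in Python of the kinds of fields such a card might carry. The field names and values are illustrative assumptions only – not CHAI’s or the FDA’s actual schema.

```python
# Hypothetical sketch of a model card's contents.
# Field names are illustrative only, not CHAI's or the FDA's actual schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str                   # model or device name
    developer: str              # who builds and maintains it
    intended_use: str           # the clinical task it supports
    training_data_summary: str  # high-level description, protecting vendor IP
    performance_metrics: dict   # e.g., {"AUROC": 0.91, "sensitivity": 0.85}
    known_limitations: list = field(default_factory=list)
    last_updated: str = ""      # cards are expected to be refreshed regularly

# Example usage with entirely fictional values:
card = ModelCard(
    name="SepsisRiskModel (hypothetical)",
    developer="Example Health AI Inc.",
    intended_use="Early warning of sepsis risk in adult inpatients",
    training_data_summary="De-identified EHR data from several U.S. health systems",
    performance_metrics={"AUROC": 0.91, "sensitivity": 0.85},
    known_limitations=["Not validated for pediatric patients"],
    last_updated="2025-01",
)
print(card.intended_use)
```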
It is one example of AI regulation in healthcare – and an indication of where FDA regulators are headed in their work to build trust around the use of AI.
“However, private and public sector stakeholder groups must work together to inform each other,” Anderson said.
When asked, he noted that the incoming administration “and every leader I’ve spoken to in the Senate and the House of Representatives is very interested in understanding how they can work together in a public-private partnership with organizations like CHAI.”
Now that the door is open to move forward on the harder AI tasks in healthcare – such as government-industry coordination on AI assurance labs and how they will function – “there is still work to be done,” Anderson said.
“We need time to do that, and I find the new administration’s willingness to work with us – and hopefully to provide that time – very refreshing and very exciting.”
Annual label updates and real-world use
Anderson said CHAI’s model card is intended to be “a living, breathing document as new capabilities emerge, especially in the generative AI space.”
It is “very likely that the metrics and the methodologies we use to evaluate these emerging capabilities will have to change or be created,” he said.
Even before the FDA released its draft guidance covering the total product lifecycle, the agency finalized its guidance on Predetermined Change Control Plans for AI and machine learning-enabled device submissions – allowing certain planned model updates without the need for new marketing submissions.
“As we think about the different sections of the model card, different data will need to be included – different evaluation results, different metrics… different types of use cases,” Anderson said.
“That kind of flexibility will be very important,” he added, noting that an AI model or system’s nutrition label will need to be updated regularly, “certainly, at least on an annual basis.”
For healthcare providers, there is a great deal of complexity to consider when using AI-enabled clinical decision support tools while trying to minimize errors.
“Imperfect transparency is something we will struggle with and have to work through,” he stressed.
Whether or not a model was trained on a specific set of features relevant to a particular patient may not be noted on an easy-to-read model card.
“You can include all the information under the sun in these model cards, but the vendor community would be at risk of making (intellectual property) public,” he said. “So it’s a balance between how you protect the vendor’s IP but give the customer – the doctor in this case – the necessary information to make the right decision about whether or not they should use that model with the patient in front of them.”
“The causal relationship is very profound as to how that might affect a particular outcome for the patient in front of you,” he acknowledged.
Bringing others to the AI evaluation table
While HTI-1’s 31 categorical areas – focused on electronic health records and other certified health IT – are “really a good starting point,” they are not enough for the different use cases of AI, especially in the direct-to-consumer space, Anderson said.
“The model cards we’re developing are intended to be used quite broadly across these different use cases, and in the consumer space, especially with generative AI, there’s going to be a whole bunch of new use cases emerging over the next year,” he explained.
However, over the next two to five years, evaluating AI models in healthcare will become even more complex, raising questions about how to define “human flourishing.”
Anderson said he believes use cases will be closely tied to health AI agents, and that developing trust frameworks around them will require the support of “ethicists, philosophers, sociologists and spiritual leaders” to help technologists and healthcare AI experts think through an evaluation framework for those agents.
“It’s going to be a real challenge to develop the kind of framework for evaluation in that agentic AI future,” he said. “It’s a very intimate personal space, and how do we build that trust with those models? How do we evaluate those models?”
Anderson said that in the coming year, CHAI will spearhead a “very deliberate effort to bring together community members and stakeholders that you wouldn’t necessarily first think of as the types of stakeholders you would bring into an effort like this.”
“We really need to make sure that these models align with our values, and we don’t yet have a section on how to evaluate a model for that. I don’t know how to do that yet. I don’t think anyone does yet.”
Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.