CHAI launches open-source AI nutrition label model card for healthcare

The Coalition for Health AI has announced the availability of the open-source version of its Applied Model Card on GitHub. The healthcare industry-led coalition said Thursday that it designed the card to enable healthcare AI developers to provide critical information about how their AI systems are trained.

According to Brian Anderson, CEO of CHAI, parts of the draft open-source model card – a "nutrition label" for healthcare AI – go beyond the U.S. Department of Health and Human Services' Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing final rule for certifying healthcare IT systems.

The nutrition label also offers opportunities to align with other voluntary standards, such as the National Academy of Medicine’s AI Code of Conduct.

“It’s an important step in starting the conversation between a customer and a supplier, rather than just leaving it to a PowerPoint slide and anecdotal stories to build trust,” Anderson told Healthcare IT News on Wednesday.

Built by consensus

Driven by growing demand from both startups and healthcare systems, CHAI said its mission is to ensure that anyone building and using AI in healthcare can make informed decisions, and that the open availability of the nutrition label provides greater transparency and confidence in the AI tools they select.

If we want doctors, nurses and patients to “trust AI models that will be used in more and more healthcare use cases, we need more transparency about how these models are created and how they perform,” he said. “The model card achieves that kind of transparency.”

Any health IT company or healthcare system can use CHAI’s model card in any way they choose, which the coalition says could help streamline procurement processes and improve implementation at scale.

“We want the model card to be widely available and widely used by the supplier community and by the customer community,” Anderson said.

The CHAI nutrition label grew out of a collaborative effort among multiple stakeholders to “build a consensus set of definitions about what responsible AI looks like,” Anderson explained. That includes agreed-upon metrics for evaluation, how to think about performance, and assessments of fairness and bias.

The coalition has tried to unite regulators and developers to advance AI standards.

“It’s a real challenge when you bring together vendors, AI model developers and their customers and try to reach consensus on what is the minimum level of transparency that we can agree on and that we need from the developers,” he said of the work of the past eight months.

While CHAI said in October that the finalized certification rubric and model card designs could be expected by the end of April 2025, after stakeholder feedback is incorporated, the coalition is asking that feedback from testing be submitted via GitHub on or before January 22.

Standards and coordination

While the organization has nearly 3,000 member organizations, making the nutrition label freely available ultimately helps healthcare systems that have hundreds, if not thousands, of AI tools. A digital, open-source, machine-readable version of the model card serves as a standard that can be used repeatedly.

“You want to have scalable solutions that can meet the challenge of managing and monitoring multiple AI systems and tools,” Anderson said.

Another area where Anderson spends a lot of time is alignment.

For example, many in the industry are focused on patients being part of the development process.

“We think this is really important because our patient community groups believe, and I believe, if we’re going to put patients at the center of this, developers have to do that from the beginning” in a way “that meets patients where they are,” he said.

The CHAI AI Model Card also includes a section on the National Academy of Medicine’s AI Code of Conduct, which is not part of the HTI rule.

“We believe that suppliers should have the opportunity to articulate whether they believe the development of their model is in line with the AI Code of Conduct that NAM describes, that that should be there, and that they should have the opportunity to express their perspective on whether they believe their models were developed in accordance with it.”

Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.