CHAI: Look for healthcare AI models’ ‘nutrition labels’ coming soon
The Coalition for Health AI unveiled new plans last Friday for how it would certify independent artificial intelligence labs.
The draft frameworks come as the group – made up of healthcare heavyweights like Mayo Clinic, Penn Medicine and Stanford, along with Amazon, Google, Microsoft and other tech giants – sets a timeline for its goal of delivering a network of standardized testing laboratories that would assess AI and machine learning models with so-called CHAI model cards, which the group compares to the ingredient and nutrition labels on food.
CHAI leaders say the certification rubric and model card designs are expected by the end of April 2025, after the group incorporates stakeholder feedback from members, partners and the public.
WHY IT’S IMPORTANT
The CHAI certification program was designed in collaboration with the ANSI National Accreditation Board and several emerging quality assurance laboratories, using ISO/IEC 17025 – the predominant standard for testing and calibration laboratories worldwide, and the same standard used for ONC’s electronic health record certification program. Among other things, it requires mandatory disclosure of conflicts of interest between assurance laboratories and model developers, and protection of data and intellectual property.
The testing and certification program also includes data quality and integrity requirements derived from the FDA’s guidance on using high-quality real-world data, CHAI officials note, as well as testing and evaluation metrics sourced from the coalition’s various working groups – all in line with the National Academy of Medicine’s AI Code of Conduct.
The draft model card – designed by a working group made up of experts from regional healthcare systems, electronic health record vendors, medical device manufacturers and others – provides a standardized template meant to offer transparency about algorithms, presenting basic information to help end users assess the performance and safety of AI tools.
That information includes the identity of the AI developer, the model’s intended use, targeted patient populations and more. Other fields cover performance metrics, security and compliance accreditations, and maintenance requirements. The cards also include information about known risks and out-of-scope applications, biases and other ethical considerations.
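As a purely illustrative sketch – the field names and values below are hypothetical, not CHAI’s actual schema – the kind of information the article describes could be organized like this:

```python
# Hypothetical sketch of a CHAI-style model card's contents.
# All field names and values are illustrative, not an official format.
model_card = {
    "developer": "Example Health AI Inc.",       # identity of the AI developer
    "intended_use": "early warning of inpatient sepsis risk",
    "target_population": "hospitalized adults, 18+",
    "accreditations": ["SOC 2 Type II"],         # security/compliance
    "maintenance": "quarterly drift monitoring and recalibration",
    "known_risks": ["lower sensitivity in immunocompromised patients"],
    "out_of_scope": ["pediatric patients", "outpatient settings"],
}

# An end user assessing the tool during procurement might scan key fields:
for field in ("developer", "intended_use", "known_risks"):
    print(f"{field}: {model_card[field]}")
```

The point of such a standardized template, per CHAI, is that every card presents the same fields in the same order, so a buyer can compare models the way a shopper compares nutrition labels.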
CHAI says it will continue to work with healthcare stakeholders – including patient advocates, small and rural health systems, and technology startups – for additional guidance on model card development. The group is also seeking feedback through its assurance lab and model card feedback forms.
“The rapid evolution of AI in healthcare has created a landscape that can feel unregulated and fragmented,” said Demetri Giannikopoulos, a CHAI model card working group member and head of transformation at Aidoc, who helped design and develop the cards.
“By establishing a common framework that aligns with federal regulations, we are going beyond theoretical discussions and building the foundation for scalable, reliable and ethical AI solutions that can be applied across the healthcare ecosystem.”
THE BIG TREND
Since its founding in 2021, CHAI, along with others with similar goals, has worked to be the go-to source for trustworthy AI – whether for patient safety, privacy and security, fairness and equity, model transparency, or usability and effectiveness.
As the technology evolves and proliferates, so does the coalition – bringing together a range of stakeholders from hospitals and health systems, Big Tech, startups, government agencies and patient advocates, all focused on the privacy and security, fairness, transparency, usability and safety of AI algorithms.
In an interview in Boston last month at the HIMSS AI in Healthcare Forum, CHAI CEO Dr. Brian Anderson noted that the group, which he co-founded as a side project while he was MITRE’s chief digital health physician, now has more than 4,000 members from across the healthcare and technology ecosystem – a big increase from the 1,300 it counted earlier this year.
Accountability and transparency around AI models have been core to CHAI’s mission since its founding. To that end, it has been working on initiatives such as the Blueprint for Trustworthy AI Implementation Guidance and Assurance in 2023 and the Draft Framework for Responsible Development and Implementation in June 2024.
The new model cards are designed to meet the HTI-1 requirements promulgated by ONC earlier this year, and are intended as an easy-to-read starting point for organizations assessing AI models during the procurement process, and for EHR vendors looking to comply with the Health IT Certification Program.
Anderson noted that Micky Tripathi, assistant secretary for technology policy, national coordinator for health information technology and acting chief artificial intelligence officer at HHS, has previously said that industry and the private sector should be the ones determining what goes into AI model cards.
“We’ve taken on that task,” said Anderson, adding that CHAI is actively seeking the end-user perspective and a greater variety of stakeholders and use cases as it considers the myriad applications of AI in healthcare.
The goal is to build an “easily digestible” yet “technically specific, detailed” model card that the industry “[can] focus around,” he said.
As for AI assurance labs, Anderson noted that nearly every other sector of the economy uses similar independent evaluators, and healthcare should have the same.
“We want to build a network of laboratories that are reliable, competent and capable – that customers can turn to, and that healthcare systems and technology vendors alike can rely on,” he said.
ON THE RECORD
“This has been a pivotal year for CHAI on our journey to enable reliable, independent assurance of AI solutions with local monitoring and validation,” Anderson said in a statement. “We are thrilled with the progress of our CHAI working groups, which represent a diversity of perspectives and expertise across the health ecosystem.
“These frameworks for certification and basic transparency are building blocks of responsible AI in healthcare. They will help streamline the path for AI innovation, build trust with patients and physicians, and position healthcare systems and solution innovators ahead of emerging state and federal regulations.”
Mike Miliard is editor-in-chief of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.