CHAI releases a responsible AI framework for public comment

The Coalition for Health AI has released its draft framework for the responsible development and deployment of artificial intelligence in healthcare.

The framework – consisting of a standard guide and a set of checklists – was developed over more than two years, according to CHAI, which says it addresses an urgent need for consensus standards and practical guidance to ensure AI in healthcare benefits all populations, including groups from underserved and underrepresented communities.

It is now open for a 60-day public review and comment period.

WHY IT MATTERS

Launched in December 2021, CHAI previously released a Blueprint for Trustworthy AI in April 2023 as a consensus-based effort among experts from leading academic medical centers, regional healthcare systems, patient advocates, federal agencies and other healthcare and technology stakeholders.

CHAI said in its announcement Wednesday that a new guide combines principles from the Blueprint with guidance from federal agencies, while the checklists provide actionable steps for applying assurance standards in daily operational processes.

Functionally, the Assurance Standards Guide outlines industry-agreed standards for deploying AI in healthcare, while the Assurance Reporting Checklists can help organizations identify use cases, develop AI products for healthcare, and then deploy and monitor them.

The principles underlying the design of these documents are consistent with the National Academy of Medicine's AI Code of Conduct, the White House Blueprint for an AI Bill of Rights, various National Institute of Standards and Technology frameworks, and the cybersecurity framework from the Department of Health and Human Services' Administration for Strategic Preparedness and Response, according to CHAI.

Dr. Brian Anderson, CEO of CHAI, emphasized the importance of the public review and comment period to help ensure effective, useful, safe, secure, fair and equitable AI.

“This step will demonstrate that a consensus-based approach across the health system can both support innovation in healthcare and build confidence that AI can serve us all,” he said in a statement.

The guide would provide a common language and understanding of the AI lifecycle in healthcare and explore best practices for designing, developing and deploying AI within healthcare workflows, while the draft checklists would enable independent assessment of AI solutions across healthcare settings throughout their lifecycle to ensure they are effective, valid, safe and minimize bias.

The framework presents six use cases to demonstrate considerations and best practices:

  1. Predictive EHR risk (pediatric asthma exacerbation)
  2. Imaging diagnostics (mammography)
  3. Generative AI (EHR query and extraction)
  4. Claims-based outpatient care (care management)
  5. Clinical operations and administration (prior authorization with medical coding)
  6. Genomics (precision oncology with genomic markers)

Public reporting of the results of applying the checklists would ensure transparency, CHAI noted.

The coalition’s editorial team reviewed the guide and checklists, which were presented at a public forum at Stanford University in May.

One CHAI participant, Ysabel Duron, founder and executive director of the Latina Cancer Institute, said in a statement that the collaboration and engagement of diverse and multisectoral patient voices are needed to “protect against bias, discrimination, and unintended harmful outcomes.”

“AI could be a powerful tool in overcoming barriers and closing the gap in healthcare access for Latino patients and medical professionals, but it could also do harm if we don’t have a seat at the table,” she said in CHAI’s announcement.

THE BIG TREND

After the House Energy and Commerce Health Subcommittee raised the issue last month during a hearing on the U.S. Food and Drug Administration's regulation of medical devices and other biological products, more lawmakers are now asking questions of the FDA and the Centers for Medicare & Medicaid Services about their use and oversight of healthcare AI.

The Hill reported Tuesday that more than 50 lawmakers in both the House of Representatives and the Senate called for more oversight of artificial intelligence in Medicare Advantage coverage decisions, while STAT said it had obtained a letter from Republicans criticizing the FDA's collaboration with CHAI.

Dr. Mariannette Jane Miller-Meeks, R-Iowa, asked the FDA at the May 22 hearing whether it would outsource AI certification to CHAI, a group she said was not diverse and "showed clear signs of regulatory capture efforts."

“It doesn’t pass the smell test,” she said.

Dr. Jeff Shuren, director of the Center for Devices and Radiological Health, told Miller-Meeks that CDRH works with CHAI and other AI industry coalitions as a federal liaison, and that it does not engage the organization to review applications.

“We also told CHAI that they need more representation in the medical sector,” Shuren added.

ON THE RECORD

"Shared ways to quantify the usefulness of AI algorithms will ensure we realize the full potential of AI for patients and healthcare systems," said Dr. Nigam Shah, co-founder and board member of CHAI and chief data scientist for Stanford Healthcare, in a statement. "The Guide represents the collective consensus of our 2,500-member CHAI community, including patient advocates, physicians and technologists."

Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a publication of HIMSS Media.

The HIMSS AI in Healthcare Forum will take place September 5-6 in Boston.