National Academy of Medicine establishes code of conduct for AI in healthcare
The National Academy of Medicine has released a landscape overview, code principles, and commitments stating that accurate, safe, reliable, and ethical AI transformation in healthcare and biomedical science is achievable.
Based on the Leadership Consortium’s Learning Health System Core Principles, an initiative the academy has led since 2006, the organization said its new design framework promotes responsible behavior in the development, use and ongoing assessment of AI.
Its core principles call for inclusive collaboration, continuous safety assessment, and efficiency and environmental protection.
WHY IT MATTERS
The complete commentary, which includes a landscape overview and a “Concept of a Code of Conduct Framework: Code Principles and Code Commitments,” was developed through the academy’s AI Code of Conduct initiative under a steering committee of expert stakeholders, according to an announcement.
The code principles and proposed code commitments “reflect simple guideposts to guide and measure behavior in a complex system and provide a starting point for real-time decision making and detailed implementation plans to advance the responsible use of AI,” the National Academy of Medicine said.
The Academy’s Artificial Intelligence Code of Conduct Initiative, launched in January 2023, has involved many stakeholders – mentioned in the acknowledgments – in co-creating the new draft framework.
“The promise of AI technologies to transform health and healthcare is enormous, but there are concerns that their inappropriate use could have harmful consequences,” Victor Dzau, president of the academy, said in a statement.
“There is an urgent need to establish principles, guidelines and safeguards for the use of AI in healthcare,” he added.
Starting with a comprehensive review of the existing literature on AI guidelines, frameworks, and principles – some sixty publications – the authors identified three areas of inconsistency: inclusive collaboration, ongoing safety assessment, and efficiency and environmental protection.
“These issues are of particular importance because they highlight the need for clear, intentional action between and among diverse stakeholders, including the interstitium, or connective tissue that unites a system in pursuit of a shared vision,” they wrote.
Their commentary also identifies additional risks of using AI in healthcare, including misdiagnosis, overuse of resources, breaches of privacy, staff displacement and “oversight based on an over-reliance on AI.”
The framework’s ten code principles and six code commitments are intended to ensure that AI best practices maximize human health while minimizing potential risks, the academy said, noting that they serve as “basic guidelines” to support organizational improvements at scale.
“Health and healthcare organizations that align their visions and activities with these ten principles will help drive the system-wide alignment, performance, and continuous improvement that are so important in light of today’s challenges and opportunities,” the academy said.
“This new framework puts us on a path toward the safe, effective and ethical use of AI while realizing its transformative potential in healthcare and medicine,” added Michael McGinnis, executive officer of the National Academy of Medicine.
Peter Lee, president of Microsoft Research and a member of the academy’s steering committee, noted that the academy is inviting public commentary (through May 1) to refine the framework and accelerate AI integration in healthcare.
“Such progress is critical in overcoming the barriers we face in America’s healthcare system today and ensuring a healthier future for all,” said Lee.
In addition to stakeholder input, the academy said it would bring together critical contributors in working groups and test the framework in case studies. The academy will also consult with individuals, patient advocates, healthcare systems, product development partners and key stakeholders – including government agencies – before releasing a final code of conduct for AI in healthcare.
THE BIG TREND
Last year, the Coalition for Health AI developed an AI Blueprint that took a patient-centered approach to addressing barriers to trust and other challenges of AI to help shape the academy’s AI Code of Conduct.
It was built on the White House AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.
“Transparency and trust in AI tools that will influence medical decisions are absolutely paramount for patients and physicians,” said Dr. Brian Anderson, head of digital health at MITRE, co-founder of CHAI and now its CEO, in announcing the blueprint.
While most healthcare leaders agree that trust is the key driver for improving healthcare and patient outcomes with AI, how healthcare systems should put ethical AI into practice remains an area full of unanswered questions.
“We don’t have a scalable plan yet as a nation in terms of how we’re going to support critical access hospitals or (federally qualified health centers) or healthcare systems that are under-resourced and not able to stand up these governance committees or these very nice dashboards that will monitor the drift and performance of models,” he told Healthcare IT News last month.
ON THE RECORD
“The new draft Code of Conduct is an important step toward creating a path forward to safely reap the benefits of improved health outcomes and medical breakthroughs possible through the responsible use of AI,” Dzau said in the National Academy of Medicine’s announcement.
Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.