HITRUST announced this week the launch of its new HITRUST AI Assurance program, designed to help healthcare organizations develop strategies for the safe and sustainable use of artificial intelligence models.
The standards and certification organization says it will continue to develop risk management guidelines for AI systems.
WHY IT’S IMPORTANT
The HITRUST AI Assurance Program prioritizes risk management as a fundamental consideration in the newly updated version 11.2 of the HITRUST CSF, says the group, and is intended to enable organizations deploying AI across various use cases to work more proactively and efficiently with their AI service providers on approaches to shared risks.
“The resulting clarity of shared risks and responsibilities will enable organizations to rely on common information protection controls already available from internal shared IT services and external third-party organizations, including AI technology platform service providers and AI-enabled application providers and other managed AI services,” said HITRUST.
The group describes the program as the first of its kind focused on achieving and communicating cybersecurity control assurances for generative AI and other emerging AI applications.
HITRUST’s strategy document, “A Path to Trustworthy AI,” is available for download.
While AI models from cloud service providers and others enable healthcare organizations to scale AI across use cases and specific needs, the opacity of deep neural networks creates unique privacy and security challenges, HITRUST officials note. Healthcare organizations must understand their responsibilities around patient data and ensure they can provide reliable risk assurances to their AI service providers.
The goal of the program is to provide a “common, reliable, and proven approach to security assurance” that enables healthcare organizations to understand the risks associated with implementing AI models and “confidently demonstrate their compliance with AI risk management principles.” “The same transparency, consistency, accuracy and quality are present across all HITRUST Assurance reports,” officials say.
HITRUST says it is working with Microsoft Azure OpenAI Service to maintain the CSF and more quickly map the CSF to new regulations, data protection laws and standards.
THE BIGGER TREND
Recent research suggests that generative AI is expected to grow into a $22 billion segment of the healthcare industry over the next decade.
As healthcare systems race to deploy generative and other AI algorithms, they are seeking to transform their operations and increase productivity across a variety of clinical and operational use cases. However, HITRUST notes that “every new disruptive technology also inherently brings new risks, and generative AI is no different.”
Responsible use is critical – and most healthcare organizations are taking a careful and thoughtful approach to exploring generative AI applications.
But there are always risks, especially when it comes to cybersecurity, where AI is a double-edged sword.
ON THE RECORD
“Risk management, security and assurance for AI systems requires that organizations contributing to the system understand the risks across the system and agree how they will collectively secure the system,” Robert Booker, chief strategy officer at HITRUST, said in a statement.
“Trusted AI requires an understanding of how controls are implemented and shared by all parties, as well as a practical, scalable, recognized and proven approach for an AI system to adopt the right controls from its service providers,” he added. “We are building AI Assurances on a proven system that provides the scalability needed and inspires the trust of all relying parties, including regulators, who care about a trustworthy foundation for AI implementations.”
“AI has enormous social potential, and the cyber risks that security leaders deal with every day extend to AI,” said Omar Khawaja, Field CISO of Databricks and HITRUST board member. “Objective security assurance approaches such as the HITRUST CSF and HITRUST Certification reports assess the required security foundation that should underlie AI implementations.”
Mike Miliard is Editor-in-Chief of Healthcare IT News.
Email the author: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.