How responsible AI can improve patient outcomes

Hospitals and health systems must understand how to balance the many new opportunities artificial intelligence offers for improving patient outcomes with the need to deliver AI products responsibly.

Beyond data privacy, there are ethical and regulatory considerations, as well as principles of what’s known as “responsible AI” that are actively being applied in products today.

Lisa Jarrett, senior director, AI and data platform, at PointClickCare, will discuss all these issues during an educational session at the HIMSS24 Global Conference and Exhibition titled “Responsible AI to Improve Patient Outcomes.”

Transparency and honesty

With the extraordinary promise of AI comes an equally enormous need to use AI in ways that enhance the work of physicians and healthcare providers with transparency and fairness, Jarrett said.

“As we weigh the options for using AI, we must also evaluate and design for ethics from the earliest planning stages through customer use and ongoing management and measurement,” she explained. “In healthcare, we must integrate the core values of responsible AI and go even further by considering the diverse ecosystem of patients, care environments, providers and physicians who will directly use or be impacted by AI features.

“To ensure successful use and positive impact, active collaboration with clinicians and users, to learn their questions and gather feedback on how AI affects their daily work, is critical,” she continued. “Healthcare IT leaders must understand how responsible AI principles play a role in the user ecosystem and healthcare environments to ensure critical questions are answered from the start and throughout the lifecycle to support effective adoption.”

Legislation and regulation for AI is emerging, and industry groups are developing and sharing principles for responsible AI in clinical decision support.

Required responsible AI practices

“Different perspectives exist among physicians, delivery environments and others on what the required responsible AI practices should be,” Jarrett said. “The recent HHS ONC HTI-1 algorithm transparency provision provides more detailed guidance for AI use in healthcare. HHS outlined a framework called FAVES (Fairness, Appropriateness, Validity, Effectiveness and Safety).

“This is a practical and meaningful framework for ensuring a consistent core set of information about the algorithms used to support decision-making,” she continued. “PointClickCare’s approach builds on these principles and engages early and often with the physicians who will be users, to integrate their questions and concerns into the product.”

This is critical to ensuring predictions are received positively and to understanding how to build customer trust, she added.

“For example, to develop the predictive return-to-hospital algorithm active in both Pacman and Performance Insights, users ranging from case managers and nurses to medical directors reviewed content and established a human baseline against which to compare algorithmic predictions and derive accuracy data,” Jarrett noted.

“There is no one-size-fits-all. Unique considerations apply to primary and edge use cases, and different personas have different perspectives and concerns,” she continued. “Responsible AI values provide a starting point for designing, training and deploying algorithms. Product teams should start with a framework and then dive deeper and adapt based on the use case and users.”

Data security and privacy

“Explainability” and transparency about the data used in algorithm development and evaluation are required, alongside data security and privacy, to ensure user trust in hospitals and health systems, she added.

An important takeaway for participants in Jarrett’s session is that it is as important for IT leaders to evaluate the responsible AI practices behind AI-driven or AI-enabled systems as it is to evaluate the quality of those systems themselves, she said.

“On behalf of their users, whether physicians or other healthcare providers, they should look for and ask questions about the explainability of algorithms, how they were developed, and how the product integrates feedback and adaptation into ongoing monitoring and management,” she explained. “These questions, and the availability of responsible AI information about the product to answer them, are critical to evaluate, especially as hospitals and health systems expand their portfolios of AI-enabled tools.

“Healthcare IT leaders are critical to ensuring a responsible AI supply chain, which should be considered as important as evidence of a trusted, secure software supply chain,” she continued. “User-level understanding and acceptance of what is behind the scenes is a prerequisite for effective adoption and use. Healthcare IT leaders know their users, their use cases, and the trust thresholds their users are or are not willing to accept.”

Transformational capabilities with AI

On another front, physicians are fundamental both to identifying transformational opportunities with AI and to raising the bar on responsible AI in clinical decision support, Jarrett said, previewing other topics in her HIMSS24 session.

“PointClickCare’s experience with predictive algorithms is that there is a wide range of acceptance and skepticism within the same persona, and it is imperative to gather enough volume to develop a solid baseline, then revisit and adapt it as conditions change,” she said.

“This proactive product developer process is part of what healthcare IT leaders should look for when evaluating AI-based solutions,” she continued. “Only with clinical collaboration and direct involvement in the development of AI products can we both reach for the stars and ensure there is an unobstructed view in the telescope.”

The session, “Responsible AI to Improve Patient Outcomes,” is scheduled for March 12 from 10:30 a.m. to 11:30 a.m. in room W208C at HIMSS24 in Orlando. More information and registration.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.