What it takes to engage clinical workers in AI

BOSTON – When asked about the evolution of artificial intelligence and when AI will become an integral part of healthcare, Dr. Patrick Thomas, director of digital innovation in pediatric surgery at the University of Nebraska Medical Center College of Medicine, said clinical training needs to be revamped.

“We need to prepare clinical students for the real world they’re going to encounter,” he said Thursday at the HIMSS AI in Healthcare Forum.

Moderating a panel on how to help healthcare workers trust and implement AI, one that highlighted the importance of governance, keeping people informed and sharing responsibility with developers for maintaining the quality of clinical AI, Thomas asked his fellow panelists how they are addressing clinicians’ skepticism and doubts.

Thomas was joined on the panel by Dr. Sonya Makhni, medical director of the Mayo Clinic Platform; Dr. Peter Bonis, chief medical officer of Wolters Kluwer Health; and Dr. Antoine Keller of Ochsner Lafayette General Hospital.

Expansion of clinical resources

Using large language models to ease the heavy cognitive burden on clinicians is still fraught with complications, ranging from bias and hallucinations to cost.

Bonis noted that application developers will likely have to bear the costs of building their systems in addition to the costs of the evolving underlying core models.

Keller added that the healthcare industry produces more information than the clinical workforce can process.

“We don’t have enough manpower to make accurate clinical decisions in a timely manner,” he said. As clinicians focus on risk mitigation, building safeguards to address concerns about AI use, it’s essential to give them the comfort level they need to adopt the technology.

Keller, a cardiac surgeon, described how Ochsner Health in Louisiana is offering an AI-enhanced screening tool called Heart Sense to community health partners to drive interventions.

By screening underserved communities with low-cost technology, the AI tool “significantly expands the workforce,” he said.

Improving access in underserved communities

Use of the heart screening tool not only improves utilization for Ochsner, it also focuses attention on the patients who need care most.

Thomas asked what it means for AI to improve care when a healthcare organization has no data scientists on staff, and how Ochsner’s community health partners understand the AI tool and the way it is being used.

There is a lot of hesitancy, and the communities being served benefit from hand-holding, Keller said.

“You have to be present and aware of the barriers and issues people experience in using the technology,” he explained.

But in areas with a shortage of medical providers, those who receive the technology are grateful for it, he said.

Detection of a heart murmur, one of the key diagnostic criteria, is required before a patient can undergo aortic valve replacement. Using the AI-driven screening tool, the health system found that 25% of people over age 60 in the communities it serves have a pathologic murmur, a sign of valve disease that can be remedied with surgical intervention.

“The prevalence data shows that a large portion of the population is undiagnosed,” Keller said.

Using AI to identify which patients are at risk and which can still be treated is a major advantage, he said, because it allows intervention before patients develop irreversible dysfunction.

But acceptance depends on a well-trained workforce, he said.

Training has to be “visual, with something that is concrete” and easy to understand, even for those with little formal education, he said.

Commitment and shared responsibility

“The stakes are incredibly high for clinical care,” Bonis acknowledged, noting that a guiding principle in developing trustworthy AI, in all its nuances, is to always keep a human in the loop for clinical decisions.

While frontline clinical professionals may not always be interested in the “sausage making” behind AI, “caution from a vendor perspective is key,” he said.

For Makhni, the question is how to bring expertise together across the entire AI lifecycle, she said.

The Mayo Clinic Platform is working directly with AI developers and Mayo Clinic to examine how clinical AI can be deployed in a way that is also user-centric – “and then that information can be transparently communicated to empower the end user.”

Such a multidisciplinary assessment can determine whether an AI developer has attempted to assess bias, for example. That information can then be passed on to clinicians in a way they can understand.

“We meet [developers] where they are in their journey,” but the goal is to promote the principles of safety, fairness and accuracy, she said.

It is critical to consider the digital divide and to ask the clinical workforce about their concerns. Doing so is essential to delivering safe AI systems, and that burden cannot rest solely on the shoulders of users.

“Sometimes it should also fall on the solution developer,” she said. “We need to have a shared responsibility.”

While healthcare won’t solve all of AI’s challenges in the short term, “we can take a measured approach,” she added.

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a publication of HIMSS Media.