Implementing AI? Healthcare organizations ‘must have a specific measurable goal’
Most hospitals and healthcare systems are still evaluating whether artificial intelligence is right for their organization. However, many are looking to extend existing AI and machine learning implementations.
Healthcare provider organizations in either of these categories are the target audience for an educational session at an upcoming HIMSS forum, one that will provide a strategic guide to AI adoption in healthcare.
Tom Hallisey is head of digital health strategy and board member at Columbia Memorial Health. He will speak about healthcare system AI work at the HIMSS AI in Healthcare Forum 2023, December 14-15 in San Diego. His panel session, which he moderates, is titled: “A strategic guide to integrating AI into your healthcare roadmap.”
Panelists include Albert Marinez, Chief Analytics Officer at the Cleveland Clinic; Tatyana Fedotova, director of global data, platforms and partnerships, at Johnson & Johnson; and Christopher Larkin, chief technology officer at Concord Technologies.
Panelists will reveal the most critical questions to ask and decisions to make at every stage of the AI journey, from build vs. buy and tool selection to ensuring AI investments are focused on maximum impact, and much more.
We spoke with Hallisey to get a preview of the session and a better idea of what it will take to transition to AI in healthcare.
Q. What’s an example of an important question to ask during the early stages of a healthcare AI journey? And why is it important?
A. As is usually the case, the most important question to ask is what problem we are trying to solve. Generative AI tools can deliver so many new and potentially valuable results, but we need to have a specific measurable goal in mind to demonstrate the value and scale the work.
Once we have a pilot idea, the AI-specific question becomes whether we should build, buy, or modify an existing large language model tool with internal data and rules. This decision will be based on internal capabilities, privacy and security considerations, scalability, and data/bias considerations.
The projects most likely to fail are those that start with a really cool new tool and then go looking for an application, and there are no tools cooler than AI right now. Success takes a careful, considered approach to the type of value we are looking for, the types of tools we can rely on, and the resources we can provide to implement them successfully.
Q. What’s one way to ensure AI investments are focused on maximum impact?
A. To ensure the best impact from AI investments, create a committee to collect and prioritize ideas, guide resource selection, review pilot results, and help scale. Make sure you include a diverse group on the committee.
The business units and end users know best where problems and inefficiencies lie and can direct planning for the best impact; their buy-in will be essential to achieving success. If a project is deemed too risky for a particular area because this technology is still very new and not yet well understood, it is unlikely to succeed. It’s better to start somewhere else and educate staff about the possibilities and potential problems with AI tools.
However, it is also important to have top leaders in the selection process to ensure decisions align with current organizational strategies and key concerns. There are many AI use cases that can add some value but would divert significant resources from tackling the most pressing problems of the day.
We also need to select projects and tools that are mature enough for proper integration into existing or updated workflows. One-off or niche projects won’t yield big results; even ChatGPT’s web usage has declined since its peak. The tools must be integrated into the business to change the workflow and create real value.
Q. What’s one tip to ensure long-term success with an AI investment in healthcare?
A. AI tools are often so new that long-term success can be difficult to guarantee. As AI LLMs or clinical algorithms continue to be used, data is updated, demographics change, and results may vary.
A recent study even pointed out how algorithms can build their own obsolescence, as interventions change the underlying data on which they are built and thus their ability to predict.
Plans should be put in place to continuously measure the results of each AI tool and intervention. What works in one location or in one population may not work in another, and as I noted, what works today may not work next year. New AI regulations from the White House Executive Order seek to address these concerns, as do ONC’s recently proposed rules addressing algorithms integrated into clinical decision support in the EHR.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: email@example.com
Healthcare IT News is a HIMSS Media publication.