The future of AI is what healthcare organizations make of it

Organizational preparedness, prevailing regulations, and user understanding are some of the factors that influence the success of deploying AI in healthcare systems.

During the HIMSS24 APAC panel session, “AI Horizons: Exploring the Future of Innovations,” Professor In-Young Choi, Deputy CIO of the Catholic Medical Center (CMC), Dr. Shankar Sridharan, Chief Clinical Officer at Great Ormond Street Hospital, and Dr. Ngai Tseung Cheung, Head of Information Technology and Health Informatics at the Hospital Authority, Hong Kong (HAHK), shared results, lessons, and expectations for the implementation of AI in healthcare. Professor Kee Yuan Ngiam, head of the National University Health System’s AI office, moderated the panel.

The reality of AI implementation

AI in healthcare is touted as an all-in-one solution for digital needs, but Dr. Cheung thinks otherwise.

“AI is an amazing technology and sometimes seems magical, but in reality it is not. AI is a tool no different from any other technology we can deploy.”

He shared questions to consider when applying AI: “How is AI deployed into a workflow at scale so that it can deliver positive results? What is the impact (of AI on the organization)? How does it give the users something useful?”

Dr. Shankar from Great Ormond Street Hospital shared similar considerations. “The medicinal asset (for implementing AI) is enthusiasm, which can be quite contagious. But we need to address its purpose… When we assess the safety, security and technology benefits, is there a clinical, operational or patient experience benefit?”

However, Professor Choi had a different opinion, given the strict regulations she works with.

“There is no government control over EHR contracts. Each hospital has its own (exclusive) data. It’s not easy to share data with other hospitals, and it’s not easy for AI companies to access hospital data. If an (AI) algorithm uses cloud computing, it will be hosted outside Korea, and the law does not allow our medical data to be sent to foreign countries,” she explained.

Overcoming barriers to adoption

Given the layered nature of AI tools, their adoption may require additional reassurance. Dr. Cheung shared HAHK’s proactive stance of conducting internal validations of AI tools.

Dr. Shankar, on the other hand, provided reassurance about data security through vetted partnerships. “We have a trusted research department and commercial relationships with the pharmaceutical industry and industry partners that keep (data) flowing. We can then assure our CEO that our data is safe and that the (AI) tool is useful and valuable without risking data loss.”

Dr. Cheung also noted that regular demonstrations can assuage pessimism among healthcare workers. “The good thing is that we have 43 hospitals. If it (an AI test) fails, that’s fine. Then we can show that we tried… If we can show them that it works and the hesitant people (staff and doctors) switch (to optimism), that will be good… We have to show them that the AI is ready (for use).”

Realizing future AI potential

Despite adoption barriers, CMC plans to develop software that can help pathologists guide treatment for lung cancer.

“One patient can have many cancer subtypes. For example, an individual can have a 20% papillary subtype. Humans can’t count that (figure), but AI can. It can help doctors provide better treatment to the patients,” Prof. Choi explained.

“We’re focusing on one specific disease for now. But after generative AI comes along, maybe AI can see (relevant health) details in a more comprehensive way.”

Meanwhile, Dr. Cheung reflected on the human impact of future AI.

“Current AI is not conscious. It is not at the level of a human being. However, there is a concept of artificial general intelligence that is being touted as superhuman. If that (technology) is better than humans in every way, there will be a huge change for medicine and humanity.”

Dr. Shankar believes there is still a long way to go to realize AI’s full potential. “We’re a bit like cavemen (with AI). Our limitations are our own, and so are our use cases.”