How AI and FHIR can help reduce sepsis mortality rates

While 80-85% of sepsis cases occur within the first 48 hours of admission and carry a relatively low mortality rate (5-10%), the 15-20% of cases that develop later carry a much higher mortality rate (15-30%).

To identify earlier, and more reliably, the sepsis cases that are not present on admission, an end-to-end early sepsis prediction and response workflow was created in the inpatient setting at a large safety-net hospital. First, a machine learning model was built to predict, in real time, a patient's risk of becoming septic.

The model was then embedded in clinical workflows via FHIR APIs to make it usable at the point of care. It queries the EHR every 15 minutes and alerts healthcare providers when a patient's risk exceeds a threshold that can be tailored to the local population.
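To make the moving parts concrete, here is a minimal Python sketch of what such a polling loop could look like. The endpoint, the toy risk score and the print-based alert are illustrative stand-ins, not Parkland's actual implementation; only the FHIR search parameters and LOINC codes are standard.

```python
import math
import time

import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint
RISK_THRESHOLD = 0.60                       # tunable to the local population


def latest_value(patient_id: str, loinc: str) -> float | None:
    """Fetch a patient's most recent Observation for one LOINC code."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc,
                "_sort": "-date", "_count": 1},
        timeout=30,
    )
    resp.raise_for_status()
    entries = resp.json().get("entry", [])
    if not entries:
        return None
    return entries[0]["resource"]["valueQuantity"]["value"]


def sepsis_risk(patient_id: str) -> float:
    """Toy logistic-style score; a real deployment calls the trained model."""
    heart_rate = latest_value(patient_id, "8867-4") or 80.0  # LOINC: heart rate
    temp_c = latest_value(patient_id, "8310-5") or 37.0      # LOINC: body temperature
    return 1 / (1 + math.exp(-((heart_rate - 90) / 20 + (temp_c - 38))))


def poll_once(patient_ids: list[str]) -> None:
    for pid in patient_ids:
        risk = sepsis_risk(pid)
        if risk > RISK_THRESHOLD:
            print(f"ALERT: patient {pid} sepsis risk {risk:.2f}")  # alerting hook


while True:  # the 15-minute cadence described above
    poll_once(["example-patient-1"])  # a real system iterates the inpatient census
    time.sleep(15 * 60)
```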

Finally, an EHR-integrated decision support app, ISLET, was added so clinicians can easily view and understand the model's output at the bedside. Predicting, alerting, visualizing the root causes and taking action complete the workflow. This entire loop has run every 15 minutes for thousands of patients over the past year.

Yusuf Tamer is chief data and applied scientist at the Parkland Center for Clinical Innovation. He will tell this story in detail at HIMSS24 in an educational session titled “Closing the Loop in Sepsis Prediction With ML and ISLET Visualization.”

We interviewed Tamer to get a sneak peek into the session ahead of the big show next month in Orlando.

Q. What is the overarching focus of your session? Why is this important to healthcare IT leaders in hospitals and healthcare systems today?

A. Sepsis is a serious condition caused by an infection and can lead to multiple organ failure. It is a medical emergency that requires prompt identification and treatment. The primary focus of my session is to discuss the role of artificial intelligence in the early prediction of sepsis within hospital settings.

AI systems in healthcare increasingly complement healthcare providers by raising suspicion of conditions such as sepsis and supplying the reasons behind that suspicion. Providers act on these alerts only when they have confidence in the reasons given to them. This trust is built on two important pillars: timeliness and explainability.

Timeliness is crucial in the detection of sepsis. The sooner sepsis is diagnosed, the greater the patient’s chance of recovery. If an AI system identifies sepsis and alerts the healthcare provider after they have already started treatment, it reduces the value of the system. It could disrupt clinical workflow and undermine confidence in the AI system. Therefore, the AI system must be designed in such a way that it provides timely warnings that can actually help in the treatment process.

Explainability is another crucial aspect. In a patient care environment, every action taken by a healthcare provider is subject to audits. Although AI systems are not the final decision makers, they can significantly influence decision making.

Therefore, the decisions made by AI systems or machine learning models must be explainable. This transparency is crucial for audit purposes and ensures accountability in AI-enabled healthcare.

Further, the explainability of AI systems is not only important for auditing, but also for building trust with healthcare providers. If the AI system can provide clear, understandable reasons for its predictions, healthcare providers are more likely to trust the recommendations and act on them.

The session will provide valuable insights into how AI can improve the early prediction of sepsis, highlighting the importance of timeliness and explainability in building trust and improving patient outcomes.

This topic is of paramount importance to healthcare IT leaders because it touches on the intersection of technology and patient care, highlighting how AI can be leveraged to improve healthcare.

Q. What is one of the key learning points you would like session participants to take away? And how is this critical to today’s healthcare?

A. The most important thing I want to convey to the participants of this session is that machine learning models do not have to be ‘black boxes’. While performance is a critical factor for these models, an explainable model that providers trust is more likely to be used if it delivers comparable performance. This is a crucial concept in the context of today’s healthcare and healthcare IT.

Machine learning models are often seen as complex and opaque, making them difficult for healthcare providers to trust and use. However, it is important to understand that these models can be designed to be transparent and explainable.

A model that provides clear, understandable reasons for its predictions can increase trust among healthcare providers, leading to greater use, even if performance is only comparable to less transparent models.
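As a minimal illustration of that idea (not the ISLET model itself), consider a logistic regression trained on synthetic data: because the model is linear, each feature's contribution to the score can be read off directly and shown to the provider as a reason. The feature names and data below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features only; the real model's inputs are not public.
FEATURES = ["heart_rate", "resp_rate", "wbc_count", "lactate"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # synthetic training data
y = (X @ np.array([0.8, 0.6, 0.9, 1.2]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)


def explain(x: np.ndarray) -> list[tuple[str, float]]:
    """Per-feature contribution to the log-odds: coefficient * value.

    A linear model makes the 'reasons' for a prediction directly readable.
    """
    contribs = model.coef_[0] * x
    return sorted(zip(FEATURES, contribs), key=lambda kv: -abs(kv[1]))


x_new = rng.normal(size=4)
print(f"risk = {model.predict_proba([x_new])[0, 1]:.2f}")
for name, contrib in explain(x_new):
    print(f"  {name:>10}: {contrib:+.2f}")
```

More complex models can surface similar per-feature attributions through post-hoc methods; the point is that the reasons, not just the score, reach the provider.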

Furthermore, visual representation of data can significantly increase the value of these models. A picture really is worth a thousand words: a graph that illustrates how a patient’s vitals or lab values have changed over time can provide more value than a simple numerical output.

It can help healthcare providers better understand the patient’s condition and the model’s predictions, leading to more informed decision-making.
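A brief sketch of what such a trend view might look like, using matplotlib and synthetic heart-rate readings: the drift over 24 hours is obvious in the plot in a way a single current value would not be.

```python
from datetime import datetime, timedelta

import matplotlib.dates as mdates
import matplotlib.pyplot as plt

# Synthetic hourly heart-rate readings for illustration.
times = [datetime(2024, 3, 1) + timedelta(hours=h) for h in range(24)]
heart_rate = [78, 80, 79, 82, 85, 88, 90, 94, 97, 99, 103, 106,
              108, 112, 110, 114, 118, 121, 119, 124, 126, 128, 131, 133]

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(times, heart_rate, marker="o", lw=1)
ax.axhline(100, color="red", ls="--", label="tachycardia threshold")
ax.set_ylabel("Heart rate (bpm)")
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))
ax.legend()
ax.set_title("24-hour vital trend")
plt.tight_layout()
plt.show()
```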

In this session, we’ll discuss how we created a visual report page for our machine learning model and how we integrated it into the EHR. This integration allows healthcare providers to access and understand the model’s predictions directly in the patient’s EHR, increasing the model’s usefulness.

Furthermore, we will explore how Fast Healthcare Interoperability Resources (FHIR) APIs open up new, fast and interactive ways to visualize machine learning insights. These APIs let machine learning models integrate seamlessly with existing healthcare IT systems, making real-time, interactive visualization of model predictions possible.
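As a sketch of how a visualization front end could pull a time series through standard FHIR search (the endpoint and patient ID below are hypothetical, while the search parameters and Bundle paging follow the FHIR specification):

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical endpoint


def observation_series(patient_id: str, loinc: str, since: str) -> list[tuple]:
    """Return (timestamp, value) pairs for one LOINC code via FHIR search.

    `date=ge...` and `_sort=date` are standard FHIR search parameters,
    so the same query works against any conformant server.
    """
    url = f"{FHIR_BASE}/Observation"
    params = {"patient": patient_id, "code": loinc,
              "date": f"ge{since}", "_sort": "date", "_count": 100}
    points = []
    while url:
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        bundle = resp.json()
        for entry in bundle.get("entry", []):
            obs = entry["resource"]
            points.append((obs.get("effectiveDateTime"),
                           obs["valueQuantity"]["value"]))
        # Follow the Bundle's paging link if more results remain.
        url = next((link["url"] for link in bundle.get("link", [])
                    if link["relation"] == "next"), None)
        params = None  # the `next` link already encodes the query
    return points


# e.g. lactate (LOINC 2524-7) over a recent window, ready to chart:
# observation_series("example-patient-1", "2524-7", "2024-03-10")
```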

The session aims to demystify machine learning models in healthcare and highlight the importance of explainability and visualization in building trust and improving the usability of these models. This insight is critical for healthcare IT leaders as they navigate the rapidly evolving healthcare AI landscape.

Q. What is another lesson you would like session participants to take away? And how is this critical to healthcare and/or healthcare IT today?

A. It is the importance of continuous feedback from active users, in this case healthcare providers, in increasing the value of AI systems in healthcare. This is a critical aspect of healthcare and healthcare IT today.

AI systems are not standalone entities; they are part of a larger ecosystem that includes healthcare providers, patients and other stakeholders. Therefore, the development and refinement of these systems must be a collaborative process.

When healthcare providers are involved in the development of machine learning solutions, they gain a better understanding of how these systems work. This understanding promotes trust, which in turn improves their use of the tool in their decision-making process.

Additionally, healthcare providers often face alert fatigue due to the large number of alerts they receive from different systems. This can lead to important warnings being missed, potentially impacting patient care.

Therefore, it is critical to get providers’ input on what should trigger an alert and when the system should wait before alerting. This feedback can help design more effective alert systems, reduce alert fatigue, and ultimately improve patient care.
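One common way to encode that kind of feedback is a simple suppression, or debounce, rule. The sketch below is illustrative only; the window length and threshold stand in for the sort of parameters providers would help tune.

```python
from datetime import datetime, timedelta

SUPPRESSION_WINDOW = timedelta(hours=6)  # example value, tuned with clinician input
_last_alert: dict[str, datetime] = {}


def should_alert(patient_id: str, risk: float, threshold: float,
                 now: datetime | None = None) -> bool:
    """Fire only above the threshold, and at most once per window per patient."""
    now = now or datetime.now()
    if risk < threshold:
        return False
    last = _last_alert.get(patient_id)
    if last is not None and now - last < SUPPRESSION_WINDOW:
        return False  # recently alerted; avoid re-paging the care team
    _last_alert[patient_id] = now
    return True
```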

Ongoing feedback from healthcare providers can also help identify areas where the AI system needs improvement. Providers, who are the end users of these systems, can offer valuable insights into the system’s performance, usability and relevance in the clinical setting. That feedback can be used to refine the system, making it more effective and user-friendly.

The session aims to highlight the importance of user feedback in the development and refinement of AI systems in healthcare. This insight is essential for healthcare IT leaders as they strive to integrate AI into healthcare in a way that is effective, easy to use, and beneficial to patient care.

This collaborative approach to AI development not only increases the value of the AI system, but also promotes trust and understanding among its users.

The session, “Closing the Loop in Sepsis Prediction With ML and ISLET Visualization,” is scheduled for March 12, noon to 1 p.m., in room W304A at HIMSS24 in Orlando.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.