Editor’s note: This is part two of our interview with Craig Kwiatkowski. To read part one, click here.
Cedars-Sinai, the prominent California health system, has deployed, or is developing, a variety of artificial intelligence programs, putting it at the forefront of AI in healthcare.
Craig Kwiatkowski is chief information officer at Cedars-Sinai and leads the teams building the AI tools designed to improve care and help patients and caregivers.
In today’s interview, we talk with Kwiatkowski, who holds a doctorate in pharmacy, about some of the AI tools being used in healthcare. He describes how he measures the success of AI-enabled initiatives and how AI can help advance health equity. Specifically, he explains how Cedars-Sinai Connect, an AI-powered app for primary care, addresses AI bias and trains its AI on datasets that reflect diverse populations.
Q. What AI tools are you using or implementing at Cedars-Sinai that seem particularly promising?
A. Our focus is really on tools that can help reduce friction, improve efficiency and, frankly, simplify things for our healthcare providers, doctors and patients. There is no shortage of opportunities in the generative AI category.
One thing I’m excited about is ambient documentation, also known as ambient scribe or virtual scribe. That technology seems promising and I’m quite optimistic about it. We’ve been testing these tools for a while. The feedback has been solid so far.
Many physicians are finding the ambient tools help with the cognitive load and administrative burden of writing notes, and we’re seeing that as we roll these tools out more broadly. We’re also noticing it doesn’t save time in every case, but it does make the work easier and counteracts the burnout factor. And it allows physicians to focus more on the patient and less on the computer, which is obviously important.
One of the doctors I spoke to described ambient as a very good medical student. It doesn’t get everything perfect, but it’s pretty good, and it essentially makes the physician an editor rather than a writer, so to speak.
But we have also come to realize this isn’t for everyone. Some doctors have a very efficient workflow using today’s tools, templates, standard phrases and a lot of muscle memory to click through their notes and capture the information they need. For them, that is more efficient than reading through the prose and all the language that might appear in an AI-generated note.
We see similar themes with some of the other tools we’ve tested, such as the drafted in-basket message responses. The AI-generated content is very good and thorough, but it tends to be a bit more verbose than it needs to be.
The other technology we’re excited about and starting to look into is virtual sitters and virtual nursing, using AI and video-monitoring capabilities to provide alerts and more proactive management as staffing ratios begin to change. That really seems to have great potential to improve efficiency and care and to help with staffing.
Frankly, we’re also excited about the planned and ongoing work around patient access and further expanding our virtual tools, once again asking how we can make it easier not only for providers and staff, but also for patients to schedule appointments themselves and receive care more easily.
Q. How do you measure the success of AI initiatives?
A. We treat it the same way we measure any technology or new solution. Perhaps it’s good to remind ourselves that we can continue to lean on many of the proven approaches we’ve used to measure technology over the years.
And that means we develop KPIs and metrics, and then we measure the performance of the initiative against those criteria. Those criteria are usually tied to the problem we’re trying to solve and, hopefully, the ROI we expect to get from the solution.
And so those outcome metrics should be pretty clear. In the access example I mentioned, we would probably look at time to the next available appointment. Or, if we want to expand digital scheduling capabilities, it’s a simple numerator and denominator, a percentage of where we are versus where we want to be. So those things are usually pretty obvious.
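As a minimal illustration of that kind of numerator/denominator metric, the sketch below computes the share of digitally self-scheduled visits against a target. The figures, the 40% target and the variable names are hypothetical, not Cedars-Sinai data.

```python
# Illustrative sketch: a simple numerator/denominator KPI with a target.
# All numbers below are hypothetical examples, not real reporting data.

def kpi_percentage(numerator: int, denominator: int) -> float:
    """Return the KPI as a percentage of the denominator."""
    if denominator == 0:
        return 0.0
    return 100.0 * numerator / denominator

digitally_scheduled = 4_200   # visits booked through self-scheduling (hypothetical)
total_scheduled = 15_000      # all scheduled visits in the period (hypothetical)
target_pct = 40.0             # where we want to be (hypothetical)

current_pct = kpi_percentage(digitally_scheduled, total_scheduled)
gap = target_pct - current_pct
print(f"Digital scheduling: {current_pct:.1f}% (target {target_pct:.1f}%, gap {gap:.1f} points)")
```

The same pattern, a clear numerator, a clear denominator and a stated target, applies to most of the access metrics described here.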
What gets a little more challenging sometimes is that we don’t always have baseline data, or it’s a little harder to measure. Where possible, we try to quickly gather those baselines, or make some educated extrapolations to serve as a baseline for the new tool.
In the case of ambient documentation, it’s not always easy to quantify or measure physician wellbeing or burnout. Turnover is certainly one way, but there is a sliding scale of burnout that may never be reported or lead to turnover. So it’s about finding some way to measure that, if you’re not already doing so.
Surveys are one way to do that: happiness scales, intention to stay and so on. But there are also surrogate measures we can look at around note writing: pajama time, time in the chart outside scheduled hours and total documentation time. So there are ways to get the information and measure that value, but in some cases it requires a little more intentionality, and maybe some creativity, than we’ve always been proactive about.
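To make the surrogate-measure idea concrete, here is a minimal sketch of rolling up total documentation time and after-hours "pajama time" per physician. The event records, field names and numbers are hypothetical, not drawn from Cedars-Sinai systems.

```python
# Illustrative sketch: aggregating surrogate documentation-burden measures
# per physician from hypothetical documentation events.

from dataclasses import dataclass
from collections import defaultdict

@dataclass
class DocumentationEvent:
    physician_id: str
    minutes: float       # minutes spent documenting in this event
    after_hours: bool    # True if outside scheduled clinic hours ("pajama time")

def summarize(events: list[DocumentationEvent]) -> dict[str, dict[str, float]]:
    """Total documentation minutes and after-hours minutes per physician."""
    totals: dict[str, dict[str, float]] = defaultdict(
        lambda: {"total_min": 0.0, "pajama_min": 0.0}
    )
    for e in events:
        totals[e.physician_id]["total_min"] += e.minutes
        if e.after_hours:
            totals[e.physician_id]["pajama_min"] += e.minutes
    return dict(totals)

# Hypothetical events, e.g. as they might be extracted from EHR audit logs.
events = [
    DocumentationEvent("dr_a", 35, False),
    DocumentationEvent("dr_a", 20, True),
    DocumentationEvent("dr_b", 50, False),
]

for doc, stats in summarize(events).items():
    share = 100.0 * stats["pajama_min"] / stats["total_min"]
    print(f"{doc}: {stats['total_min']:.0f} min documenting, {share:.0f}% after hours")
```

Tracked over time and compared against the pre-deployment baseline, measures like these can stand in for wellbeing signals that surveys alone may miss.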
Q. How can AI help advance equity in healthcare?
A. There are a number of ways AI can help. It can analyze vast amounts of health data to identify disparities in access and outcomes, and it can help personalize care. AI automation can make systems more efficient and hopefully improve access and availability.
A great example of that is something we’ve done here at Cedars-Sinai called CS Connect, a virtual care option where physicians are available 24/7 for urgent care, same-day care and routine primary care. It helps alleviate capacity constraints at our physical locations, and it gives people access where and when they need care.
There is a guided intake that dynamically adapts to the patient’s answers during the intake process. Patients can see information about what their possible diagnosis might be, and then they can choose whether or not to see a doctor.
We recently expanded that offering to children ages three and older, and to Spanish speakers, once again expanding the pool of people who can use these tools to receive care.
Q. How does Cedars-Sinai Connect address AI bias and train AI on datasets that reflect diverse populations?
A. We know the effectiveness of these large language models and AI tools is highly dependent on the quality and diversity of the data they are trained on. We know that the more diverse the demographic and geographic data we include, the better we can control for certain biases. If populations are underrepresented, the model can make biased predictions.
So we know this is important, as is the amount of data needed to train and monitor these tools for CS Connect. The AI technology was developed by a company called K Health, based in Israel, and we built the app experience together with them. Again, it goes back to the question of build versus buy.
We saw a gap in the market and decided to build. But the AI was initially trained on patient populations in Israel, and those populations are clearly very different from the people in our community here in LA, and across California, where the tool is available.
So we recognize there are mathematical methods and approaches to adapt the datasets and the training so our populations are taken into account and those kinds of biases are controlled. There is also a growing appreciation that data and training are local, and they should be.
And we need to keep that in mind when building these tools, along with ongoing training and monitoring of the models once they are deployed. Since we implemented CS Connect, approximately 10,000 patients have gone through the tool across approximately 15,000 visits. All of those patients and visits help with the continued training and improvement of the models, which should continue to improve accuracy and maintain the safety and robustness of the solution over time.
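The interview does not specify which mathematical methods are used, so the following is only a hedged illustration of the general idea: inverse-frequency sample weighting is one common way to make underrepresented groups count more during training. The group labels, counts and weighting scheme below are hypothetical and are not K Health’s or Cedars-Sinai’s actual approach.

```python
# Illustrative sketch: inverse-frequency sample weights so that examples from
# underrepresented demographic groups contribute more to the training loss.
# Groups and counts are hypothetical.

from collections import Counter

def inverse_frequency_weights(group_labels: list[str]) -> list[float]:
    """Weight each example by total_count / (num_groups * group_count)."""
    counts = Counter(group_labels)
    n, k = len(group_labels), len(counts)
    return [n / (k * counts[g]) for g in group_labels]

# Hypothetical demographic group label for each training example.
groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
weights = inverse_frequency_weights(groups)

# One weight per group: rarer groups receive larger weights.
per_group = {g: round(w, 2) for g, w in zip(groups, weights)}
print(per_group)  # {'group_a': 0.42, 'group_b': 2.22, 'group_c': 6.67}
```

Many training frameworks accept per-example weights of this kind, and reweighting is just one option alongside targeted resampling and continued local retraining, which is consistent with the point above that data and training should reflect the local population.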
Editor’s Note: This is the eighth in a series of articles from top voices in healthcare IT discussing the use of artificial intelligence in healthcare. To read the first, featuring Dr. John Halamka of the Mayo Clinic, click here. To read the second, with Dr. Aalpen Patel at Geisinger, click here. To read the third, with Helen Waters of Meditech, click here. To read the fourth, with Epic’s Sumit Rana, click here. To read the fifth, with Mass General Brigham’s Dr. Rebecca G. Mishuris, click here. To read the sixth, with Dr. Melek Somai of the Froedtert & Medical College of Wisconsin Health Network, click here. And to read the seventh, with Dr. Brian Hasselfeld of Johns Hopkins Medicine, click here.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.