CAIOs must understand policy and business strategy, in addition to healthcare and IT

Editor’s note: This is part two of a two-part interview. To read part one, click here.

Dennis Chornenky, chief artificial intelligence consultant at UC Davis Health, knows what it takes to be a chief AI officer in healthcare—he’s been one twice.

That’s why we sat down with him for this two-part interview – to share the lessons he’s learned about this new C-suite role in healthcare.

Today, Chornenky, who has two decades of IT leadership experience and is also CEO of Domelabs AI, discusses where and how UC Davis Health is making the most use of artificial intelligence.

He describes some of the many AI projects he is working on in the California healthcare system, and offers tips for other executives who may want to become Chief AI Officer for a hospital or healthcare system.

Q. Please talk at a high level about where and how UC Davis Health is using artificial intelligence.

A. I am fortunate to have the opportunity to work with UC Davis Health and the great leadership there. I think there’s a great vision, very innovative, great doctors and staff, just a great team.

We track more than 80 AI applications in the healthcare system, and the scope is quite diverse. A lot of this also comes from individual research grants from the NIH and others that some of our researchers and clinicians have been working on – some really interesting applications.

These span a variety of applications in healthcare delivery, patient engagement, patient management, operations and administration. We have also been looking at the administrative side a lot more lately. We recently held a UC-wide conference at UCLA focused on how we can use AI more on the administrative side across all the different UC campuses and academic medical centers in the UC system.

I don’t really want to comment on specific vendors, but it’s been great to see pretty rapid adoption of AI. I think there is still a long way to go.

There are so many possibilities. As I mentioned in part one of our interview, AI is evolving very quickly. A big part of the role now is thinking about how we position ourselves for things that will be really relevant and powerful in the next year or two.

Sam Altman, CEO of OpenAI, the creator of ChatGPT, recently said that he thinks we’ll have AGI (artificial general intelligence), or something very much like it, within a thousand days. So I think to the extent that something can mimic these capabilities, whether we want to think of it as AGI or not, it will be very powerful. (Editor’s note: AGI is software with intelligence comparable to that of a human and the ability to teach itself.)

That means cognition many times more powerful than what we have, even in the most advanced models released to date. So how organizations think about positioning for that is a very important dimension, both on the governance side and on the adoption side.

Q. Describe one specific AI project you are proud of that is working well for UC Davis Health and some of the results you are seeing. How did you supervise this project?

A. I do not oversee AI projects individually. I’m a few steps removed from that, looking more at the strategic levels of governance, ensuring security and setting a broader direction for innovation and adoption. But we certainly, as I said, follow different projects and encourage and support them through different means.

One I can mention that has worked very well is the adoption of a technology we have used to help identify and prioritize stroke patients. This has helped a lot. The vendor we work with also helps share some of that information with other academic medical centers and healthcare systems, creating greater efficiencies and better care pathways for patients who may be at those other organizations.

And it has really improved patient outcomes in this space. The ability to identify a stroke more quickly makes a huge difference in the outcome for the patient. So that’s a project we’re very proud of.

Q. What are some tips you would give to other IT managers looking to become Chief AI Officer for a hospital or healthcare system?

A. That’s a very interesting question, and one I get a lot from colleagues and people who have watched my journey and are interested in doing something similar. A lot of people have really great backgrounds, and so they’re thinking about how they can possibly advance in that space. I’ll say again, at a high level, as I mentioned in part one, I think you really need to think about the different dimensions of skills that are going to be needed to be successful in this role in the future.

So understand policy, business strategy and technology – what it can and cannot do – and have domain expertise for whatever domain you are entering. If you feel you have a few of those but are a little lacking in some of the other areas, I would definitely encourage you to dig deeper into those areas and broaden your options in general.

Because, again, AI is a multi-dimensional technology, and multi-dimensional capabilities require, I think, multi-dimensional leadership. And it’s evolving so quickly, and governance is evolving too, even though governance lags far behind AI. It’s very complex.

And this is what I call the AI governance gap: the technology is evolving much faster than governance can keep up.

And organizations have very limited internal expertise, especially in regulated sectors such as healthcare and government. It will be a real challenge for those organizations to adopt AI quickly as it comes out, especially if they don’t have guardrails in place. So we’ve seen a lot of memos in the last year at academic medical centers and other organizations saying, please don’t use ChatGPT until there’s a clearly established policy on where you can use it.

Some people go ahead and use it anyway. It is not something that organizations always have control over. It’s certainly better to have these policies ready in advance so that you understand what types of applications and activities, potential risk impacts, or threat vectors you’re likely to encounter.

I think cybersecurity is probably another thing I should mention for people who are interested in this role. Cybersecurity is becoming increasingly important, especially in healthcare. Many threat actors view healthcare as a somewhat soft target that is very data-rich and holds highly valuable data that can be used in many different ways, from ransomware to augmenting insider threat capabilities with additional data.

So I think understanding the intersection of AI and cybersecurity is also very important.

I recommend that people educate themselves on these different dimensions, try to develop as many skills as possible, gain as much understanding as possible in those areas, read the news, do their best to keep up, and work with good people.

It is difficult for anyone to be an in-depth expert in all of these areas. So it’s really good to collaborate and to have good communities where there’s peer-to-peer collaboration between executives. Then, if you find yourself in an AI leadership role in your organization, you’ll have the skills and broad perspective needed to help the organization bridge that AI governance gap.

For valuable BONUS CONTENT not included in this article, click here to watch the HIMSS TV video of this interview.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
