Generative AI is this CISO’s ‘very enthusiastic intern’

Editor’s Note: This is part two of a two-part interview on AI and cybersecurity with David Heaney of Mass General Brigham. Click here to read part one.

In the first installment of this in-depth interview, Mass General Brigham Chief Information Security Officer David Heaney explained defensive and offensive applications of artificial intelligence in healthcare. He said that understanding the environment, knowing where one’s controls are deployed, and being good at the basics become even more important when AI is involved.

Today, Heaney discusses best practices healthcare CISOs and CIOs can use to secure AI deployments, how his team is deploying AI, how he educates his team on securing with and against AI, the human side of AI and cybersecurity, and the types of AI he uses to combat cyberattacks.

Q. What are some best practices that CISOs and CIOs in healthcare can use to secure the use of AI? And how are you and your team using them at Mass General Brigham?

A. It’s important to start with the way you’re framing the question, because you have to understand that these AI capabilities are going to bring about huge changes in how we care for patients and how we discover new approaches and so much more in our industry.

It’s really about how we support that and how we help secure that. As I said in part one, it’s really important to make sure we get the basics right. So if there’s an AI-driven service that’s using our data or running in our environment, we have the same requirements for risk assessments, for business associate agreements, for all the other legal agreements that we would have with non-AI services.

Because at some level we’re talking about another app, and it needs to be managed just like any other app in the environment, including restrictions against the use of unapproved applications. And that’s not to say that there aren’t AI-specific considerations that we would want to address, and there are a few that come to mind. In addition to the standard legal agreements that I just mentioned, there are certainly additional considerations for data usage.

For example, do you want your organization’s data to be used to train your vendor’s downstream AI models? The security of the AI model itself is important. Organizations should consider options around continuous validation of the model to ensure it is delivering accurate outputs in all scenarios, and that can be part of the AI governance I mentioned in part one.

There’s also adversarial testing of the models: if we put in bad inputs, does that change the output? (A minimal sketch of that kind of test appears after this answer.) And then one of the basics that has grown in importance in this environment is the ease with which a lot of these tools are adopted.

An example of this: take a look at meeting-note services like Otter AI or Read AI, and there are many more. These services are built to make adoption easy and frictionless, and they’ve done a great job of that.

The concerns about the use of these services, and the data they can access, don’t change. But the combination of the ease of use for our end users and, frankly, the coolness of these and other applications makes it really important to look at how you bring different applications, particularly AI-driven ones, into practice.
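As a concrete illustration of the adversarial testing Heaney mentions, here is a minimal sketch in Python. The keyword-based classifier is a toy stand-in for a real model, and the character-swap perturbation is just one of many strategies; none of this reflects MGB’s actual tooling.

    # Minimal sketch of adversarial input testing. The keyword classifier
    # is a toy stand-in for a real model; swap in a real model call.
    import random

    SUSPICIOUS = {"password", "urgent", "verify", "invoice"}

    def classify(text: str) -> str:
        """Toy classifier: flags text containing suspicious words."""
        return "malicious" if set(text.lower().split()) & SUSPICIOUS else "benign"

    def perturb(text: str) -> str:
        """Swap two adjacent characters -- one simple evasion strategy of many."""
        if len(text) < 2:
            return text
        i = random.randrange(len(text) - 1)
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]

    def unstable_inputs(samples, trials=10):
        """Return inputs whose label flips under small perturbations."""
        flipped = []
        for text in samples:
            baseline = classify(text)
            if any(classify(perturb(text)) != baseline for _ in range(trials)):
                flipped.append(text)
        return flipped

    print(unstable_inputs(["please verify your account", "lunch at noon?"]))

The same pattern scales up: generate perturbed variants of known inputs, compare the outputs against the baseline, and flag any drift for review.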

Q. How have you gotten your team up to speed when it comes to securing with and against AI? What’s the human element at play here?

A. It’s huge. And one of my core values for my security team is curiosity. I would say it’s the single skill behind everything we do in cybersecurity. It’s the thing where you see something that’s a little bit funny and you say, “I wonder why that happened?” And you start digging into it.

That’s the beginning of almost every improvement that we make in the industry. So, to achieve that goal, a big part of the answer is having curious team members who get excited about this topic and want to learn more about it themselves. And they just go out and play with some of these tools.

I try to set an example in this area by sharing how I’ve used different tools to make my job easier. But nothing replaces that curiosity. Within MGB, within our digital team, we try to dedicate one day a month to learning and provide access to different training services with relevant content in the space. But the challenge is that the technology is changing faster than training can keep up.

So really, nothing replaces just getting out there and playing with the technology. But also, maybe with a bit of irony, one of my favorite uses for generative AI is for learning. One of the things I do is use a prompt that says something like, “Create a table of contents for a book called X,” where X is the topic I want to learn more about. I usually also specify a little about who the author is and what the purpose of the book is.

That creates a great overview of how you can learn about that topic. And then you can ask your AI friend, “Hey, can you tell me more about chapter one? And what does that mean?” Or you can go to other resources or other forums and find the relevant content there.
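For readers who want to try the technique, here is a minimal sketch of that prompt as a script, assuming the OpenAI Python SDK; the model name, topic, and prompt wording are illustrative, not what Heaney uses.

    # Minimal sketch of the "table of contents" learning prompt,
    # assuming the OpenAI Python SDK (pip install openai).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    topic = "securing large language model deployments"  # illustrative topic
    prompt = (
        f"Create a table of contents for a book called '{topic}'. "
        "The author is a pragmatic security engineer, and the purpose of the "
        "book is to take a curious practitioner from basics to advanced topics."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

A follow-up message such as “Tell me more about chapter one” then drills into each section, exactly as described above.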

Q. Without giving away any secrets, what types of AI are you using to combat cyberattacks? Maybe you can explain more broadly how these types of AI are meant to work and why you like them?

A. Our overall digital strategy at MGB is really focused on leveraging platforms from our technology vendors. To kind of build on the vendor question from part one, our focus is on working with these companies to develop the most valuable capabilities, many of which will be AI-driven.

And to give you a sense of what that looks like, at least in a general sense, without giving away the goose that lays the golden eggs, so to speak: our endpoint protection tools use a variety of AI algorithms to identify potentially malicious behavior. Those tools then send logs from all of those endpoints to a central collection point, where a combination of rules-based and AI-based analytics looks for broader trends.

So not just on one system, but across the environment: are there any trends that indicate increased risk? We also have an identity governance suite, which is the tooling used to grant and remove access in the environment. That suite has several built-in capabilities to identify potentially risky combinations of access, whether they are already present or arriving in new access requests, so we can avoid granting that access in the first place.
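To make the “across the environment” idea concrete, here is a toy sketch of the rules-based half of that analysis over centralized logs; the record format, event names, and threshold are assumptions for illustration, not MGB’s tooling.

    # Toy sketch of rules-based trend detection over centralized endpoint logs.
    # Records are (endpoint, event_type) pairs as they might reach a collector.
    events = [
        ("host-01", "failed_login"),
        ("host-02", "failed_login"),
        ("host-03", "failed_login"),
        ("host-01", "process_start"),
    ]

    def environment_wide_trend(events, event_type="failed_login", min_hosts=3):
        """Flag an event type seen across many endpoints, not just one."""
        hosts = {host for host, kind in events if kind == event_type}
        if len(hosts) >= min_hosts:
            return f"ALERT: {event_type} seen on {len(hosts)} endpoints"
        return "no cross-environment trend"

    print(environment_wide_trend(events))

An AI-based layer would replace the fixed threshold with learned baselines, but the question stays the same: is a pattern emerging across the fleet rather than on one machine?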

So that’s the world of the platforms themselves and the technology that’s built into them. But beyond that, going back to how we can use generative AI in some of these areas, we’re using that to accelerate all sorts of tasks that we used to do manually.

The team has, I can’t put a number on it, but I will say it has saved a tremendous amount of time by using generative AI to write custom scripts for triage, for forensics, for remediation. It’s not perfect. The AI gets us to, I don’t know, 80% complete, but our analysts finalize the script, and they do it much faster than if they were creating it from scratch.

In the same way, we use some of these AI tools to create queries that feed into our other tools. We get our junior analysts up to speed much faster by giving them access to these tools so that they can use the various other technologies that we have more effectively.

Our senior analysts are simply more efficient. They already know how to do a lot of this stuff, but it’s always better to start with 80% than zero.
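To give a flavor of the 80%-complete scripts Heaney describes, here is a hedged sketch of the kind of triage draft a generative model might produce; the log path, regex, and threshold are illustrative assumptions an analyst would finish and adapt.

    # Illustrative triage draft: summarize failed SSH logins per source IP.
    # The log path, regex, and threshold are assumptions to adjust locally.
    import re
    from collections import Counter

    LOG_PATH = "/var/log/auth.log"   # adjust per environment
    THRESHOLD = 20                   # flag IPs above this many failures

    fail_re = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    counts = Counter()
    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = fail_re.search(line)
            if match:
                counts[match.group(1)] += 1

    for ip, n in counts.most_common():
        if n >= THRESHOLD:
            print(f"{ip}\t{n} failed logins -- review")

The remaining 20% is the environment-specific part, the right log sources, allow-lists, and hand-off into ticketing, which is exactly where the analysts come in.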

Generally, I would describe it as my really enthusiastic intern. I can ask it to do anything and it will come back with something between a really good starting point and possibly a great and complete answer. But I certainly wouldn’t use that answer without doing my own checks and finishing it first.

CLICK HERE to watch the video of this interview. It contains BONUS CONTENT not found in this story.

Editor’s Note: This is the tenth and final installment in a series of articles featuring top voices in healthcare IT discussing the use of artificial intelligence.

Follow Bill’s HIT reporting on LinkedIn: Bill Siwicki
Send him an email: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.
