How to apply responsible artificial intelligence in healthcare

Responsible artificial intelligence is, in effect, the only way a hospital or health system should implement AI technology. It is crucial that AI, however complex and important, be reliable.

Anand Rao is a Distinguished Service Professor at Carnegie Mellon University's Heinz College. He is an expert in responsible AI, the economics of AI and generative AI. Throughout his 35-year consulting and academic career, he has focused on innovation and on the business and societal adoption of data, analytics and artificial intelligence.

Previously, Rao was the global artificial intelligence leader at consulting giant PwC, a partner in its data, analytics and AI practice, and the innovation leader for AI in PwC’s products and technology segment.

We interviewed Rao to talk about responsible AI, how responsible AI should be applied in healthcare, how responsible AI can be combined with generative AI specifically, and what society needs to understand about adopting responsible AI.

Q. Please define what responsible AI is, from your point of view.

A. Responsible AI is the research, design, development and deployment of AI that is safe, secure, privacy-preserving or privacy-enhancing, transparent, accountable, interpretable, explainable, bias-aware and fair. It can be thought of as three successive levels of AI:

  1. Safe and secure AI. This is the minimum bar, where “AI does no harm.” It includes not causing physical or emotional harm, and protecting against adversarial attacks.
  2. Reliable AI. This is the next level, where “AI does good.” It includes AI that is transparent, accountable, interpretable and explainable, and it covers both building AI systems and operating them.
  3. Beneficial AI. This is the highest level, where “AI does good for everyone.” It includes AI that is aware of biases and built to be fair along one or more dimensions of fairness; a minimal sketch of one such fairness check follows this list.
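To make the third level concrete, here is a minimal Python sketch of one common bias-awareness check: measuring the demographic parity gap between a hypothetical triage model's outputs for two patient groups. The model outputs, the groups and the 0.1 tolerance are illustrative assumptions, not values from Rao's work.

```python
# Minimal sketch of a bias-awareness check: measure the demographic
# parity gap of a hypothetical triage model's recommendations.
# All data, names and thresholds here are illustrative.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of cases where the model recommended intervention (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in selection rates between two patient groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical model outputs (1 = flag for intervention) for two groups.
group_a = [1, 0, 1, 1, 0, 1, 0, 1]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # the tolerance is a policy choice, set here for illustration
    print("Gap exceeds tolerance -- review the model for bias before use.")
```

Demographic parity is only one dimension of fairness; a real review would test several metrics, since they can conflict with one another.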

Q. How should responsible AI be applied in healthcare? Healthcare is very different from other sectors, and lives are constantly at stake.

A. Given the high stakes in healthcare, responsible AI in healthcare should primarily be applied to augment human decision-making, rather than replacing human tasks or decision-making. “Human-in-the-loop” should be an essential feature for most, if not all, AI implementations in healthcare.
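As a rough illustration of the human-in-the-loop idea, the following Python sketch routes any case the model is uncertain about, or flags as high-risk, to a clinician instead of acting automatically. The `Prediction` type, the 0.95 threshold and the triage rules are hypothetical, chosen only to show the pattern.

```python
# Minimal "human-in-the-loop" sketch: the AI only auto-triages when it is
# confident and the case is low-risk; everything else goes to a clinician.
# The model, threshold and labels are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "low-risk" or "high-risk"
    confidence: float  # model's estimated probability, 0.0-1.0

REVIEW_THRESHOLD = 0.95  # below this, a human must decide

def triage(case_id: str, prediction: Prediction) -> str:
    if prediction.confidence >= REVIEW_THRESHOLD and prediction.label == "low-risk":
        return f"{case_id}: auto-triaged as {prediction.label} (clinician notified)"
    # High-risk or uncertain cases always go to a human decision-maker.
    return (f"{case_id}: queued for clinician review "
            f"({prediction.label}, p={prediction.confidence:.2f})")

print(triage("case-001", Prediction("low-risk", 0.98)))
print(triage("case-002", Prediction("high-risk", 0.97)))
print(triage("case-003", Prediction("low-risk", 0.80)))
```

The key design choice is that the system defaults to human review: automation is the exception that must be earned, not the rule.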

Furthermore, healthcare AI systems must comply with existing privacy laws and be thoroughly tested, evaluated, verified and validated using the latest techniques before they can be deployed at scale.
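One way to read "tested, evaluated, verified and validated" operationally is as a pre-deployment gate the model must clear on a held-out clinical test set. The sketch below shows the shape of such a gate; the confusion-matrix counts and thresholds are assumed for illustration, and real values would come from clinical and regulatory review.

```python
# Sketch of a pre-deployment validation gate: the model may only ship if it
# clears minimum sensitivity and specificity on a held-out clinical test set.
# Counts and thresholds are illustrative assumptions, not regulatory values.

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# Hypothetical confusion-matrix counts from a held-out validation set.
tp, fn, tn, fp = 180, 20, 450, 50

MIN_SENSITIVITY = 0.95  # missing true cases is the costlier error here
MIN_SPECIFICITY = 0.85

sens, spec = sensitivity(tp, fn), specificity(tn, fp)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")

if sens >= MIN_SENSITIVITY and spec >= MIN_SPECIFICITY:
    print("Validation gate passed -- eligible for staged deployment.")
else:
    print("Validation gate failed -- do not deploy; retrain or re-evaluate.")
```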

Q. Generative AI is one of your specialties. How do you concretely combine responsible AI with generative AI?

A. Generative AI brings more powerful and complex technology that can potentially cause more harm than traditional AI. It can produce wrong results, delivered in a confident tone.

It can also produce harmful and toxic language, and it is harder to explain or reason about. As a result, responsible AI for generative AI must include expanded governance and oversight, as well as rigorous testing across different contexts.
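To suggest what "rigorous testing across different contexts" might look like in practice, here is a small Python sketch that asks the same vetted medical question under several phrasings and flags answers that diverge from a clinically approved reference. The `ask_model` function and its canned answers are stand-ins for whatever generative model is under test; nothing here is from Rao's remarks.

```python
# Sketch of context-varied testing for a generative model: run prompt
# variants of a vetted question and flag answers that disagree with the
# clinically approved reference. `ask_model` is a hypothetical stand-in.

def ask_model(prompt: str) -> str:
    # Placeholder: in a real harness this would call the model under test.
    canned = {
        "adult aspirin dose": "325 mg",
        "ASPIRIN DOSE FOR AN ADULT?": "325 mg",
        "my kid has a fever, how much aspirin": "325 mg",  # confidently wrong
    }
    return canned[prompt]

# Each test pairs a prompt variant with the clinically vetted expected answer.
test_cases = [
    ("adult aspirin dose", "325 mg"),
    ("ASPIRIN DOSE FOR AN ADULT?", "325 mg"),
    # Pediatric context: aspirin is generally contraindicated (Reye's syndrome).
    ("my kid has a fever, how much aspirin", "do not give aspirin to children"),
]

failures = [(p, ask_model(p), want) for p, want in test_cases if ask_model(p) != want]
for prompt, got, want in failures:
    print(f"FAIL: {prompt!r} -> {got!r} (expected {want!r})")
print(f"{len(test_cases) - len(failures)}/{len(test_cases)} context tests passed")
```

The pediatric case illustrates the failure mode Rao describes: a fluent, confident answer that is dangerously wrong once the context changes.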

Q. One of your areas of focus is the societal adoption of artificial intelligence. What should society understand about adopting responsible AI, especially when people go to a doctor?

A. With the widespread availability of generative AI, the public is increasingly using it to obtain medical advice. Because it is difficult to tell when generative AI is right and when it is wrong, the consequences can be disastrous for patients or caregivers who do not check with their doctors.

Educating the public and healthcare providers about the potential negative consequences of generative AI is essential to ensuring its responsible use.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.