As the rush to implement AI in healthcare continues, explainability is key
Artificial intelligence (AI) is gaining a lot of attention in healthcare. Dozens of hospitals and healthcare systems have already deployed the technology with great success, mostly in administrative areas.
But success with AI in healthcare, especially in the clinical arena, cannot be achieved without addressing growing concerns about the transparency and explainability of models.
In a field where decisions can be life-saving, being able to understand and trust AI decisions is not just a technical necessity; it is an ethical must.
Neeraj Mainkar is vice president of software engineering and advanced technology at Proprio, which develops immersive tools for surgeons. He has significant expertise in applying algorithms to healthcare. Healthcare IT News spoke with him about explainability and the need for patient safety and trust, error detection, regulatory compliance, and ethical standards in AI.
Q. What does explainability mean in the domain of artificial intelligence?
A. Explainability refers to the ability to understand and clearly articulate how an AI model arrives at a particular decision. In simpler AI models, such as decision trees, this process is relatively easy because the decision paths can be easily traced and interpreted.
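To make that concrete, here is a minimal sketch of how a small decision tree's reasoning can be read directly. The scikit-learn calls, the public breast-cancer dataset and the depth limit are illustrative assumptions for this article, not anything drawn from Proprio's systems.

```python
# Minimal sketch: a shallow decision tree whose full decision logic can be printed.
# The dataset, feature names, and max_depth value are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# Every prediction is a chain of human-readable threshold tests.
print(export_text(tree, feature_names=list(data.feature_names)))

# The exact nodes visited for one record can also be recovered and inspected.
path = tree.decision_path(data.data[:1])
print("Nodes visited for the first sample:", path.indices.tolist())
```

An engineer or clinician can read those rules line by line; no comparable printout exists for a deep network with millions of parameters.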
But as we move into the realm of complex deep learning models, which consist of countless layers of intricately connected neurons, it becomes increasingly difficult to understand the decision-making process.
Deep learning models operate with a large number of parameters and complex architectures, making it nearly impossible to directly trace their decision paths. Reverse engineering these models or investigating specific issues in the code is extremely challenging.
When a prediction does not match expectations, it is difficult to determine the exact reason for the discrepancy due to the complexity of the model. This lack of transparency means that even the creators of these models can struggle to fully explain their behavior or outcomes.
The opacity of complex AI systems poses significant challenges, particularly in sectors such as healthcare, where understanding the reasoning behind a decision is crucial. As AI becomes increasingly integrated into our lives, the demand for explainable AI grows. Explainable AI aims to make AI models more interpretable and transparent so that their decision-making processes can be understood and trusted.
Q. What are the technical and ethical implications of AI explainability?
A. Striving for explainability has both technical and ethical implications. On the technical side, simplifying models to improve explainability can reduce performance; at the same time, explainability helps AI engineers debug and improve algorithms by giving them a clear understanding of where outputs come from.
Ethically, explainability helps identify biases within AI models and promotes fairness in treatment, preventing discrimination against smaller, less-represented groups. Explainable AI also ensures that end users understand how decisions are made, while protecting sensitive information and remaining HIPAA-compliant.
Q. Can you discuss error identification in relation to explainability?
A. Explainability is a key component of effective error identification and correction in AI systems. Understanding and interpreting how an AI model arrives at its decisions or outputs is what makes it possible to locate and correct errors.
By tracing decision paths, we can determine where the model may have gone wrong, allowing us to understand the "why" behind an incorrect prediction. This understanding is crucial for making the necessary adjustments to improve the model.
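As an illustration only, and not the interviewee's workflow, the sketch below finds a misclassified record and ranks which inputs pushed a simple linear model toward the wrong answer; the public dataset and the logistic-regression model are assumptions chosen because their per-feature contributions are easy to compute.

```python
# Illustrative sketch: attribute a wrong prediction to the inputs that drove it.
# The dataset and the logistic-regression model are assumptions for this example.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Locate predictions that do not match expectations.
wrong = np.where(model.predict(X_test) != y_test)[0]

if wrong.size:
    i = wrong[0]
    # For a linear model, coefficient * scaled feature value is a per-feature
    # contribution to the decision: a crude but readable "why" for debugging.
    scaled = model.named_steps["standardscaler"].transform(X_test[i:i + 1])[0]
    contributions = model.named_steps["logisticregression"].coef_[0] * scaled
    for j in np.argsort(np.abs(contributions))[::-1][:5]:
        print(f"{data.feature_names[j]}: {contributions[j]:+.2f}")
```

For deep models the attribution step is harder and typically relies on gradient- or perturbation-based methods, but the debugging loop is the same: trace the wrong answer back to the inputs and model behavior that produced it.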
Continuous improvement of AI models depends heavily on understanding their mistakes. In healthcare, where patient safety is paramount, the ability to debug and refine models quickly and accurately is vital.
Q. Can you elaborate on regulatory compliance regarding explainability?
A. Healthcare is a highly regulated industry with strict standards and guidelines that AI systems must adhere to in order to ensure safety, efficacy, and ethical use. Explainability is important to achieving compliance because it addresses several key requirements, including:
- Transparency. Explainability ensures that every decision made by the AI can be traced back and understood. This transparency is necessary to maintain trust and ensure that AI systems operate within ethical and legal boundaries.
- Validation. With Explainable AI, you can demonstrate that models have been thoroughly tested and validated and that they perform as expected in a variety of scenarios.
- Reducing bias. Explainability allows biased decision-making patterns to be identified and mitigated, so that models do not unfairly disadvantage any specific group (a simple subgroup check is sketched below).
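For illustration, a minimal subgroup check of the kind explainability enables is sketched below; the arrays are random placeholders standing in for a real model's predictions and outcomes, not a validated fairness audit.

```python
# Illustrative sketch: compare model behavior across patient subgroups.
# group, y_true, and y_pred are random placeholders for a hypothetical model.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["group_a", "group_b"], size=500)   # e.g., a demographic attribute
y_true = rng.integers(0, 2, size=500)                  # observed outcomes
y_pred = rng.integers(0, 2, size=500)                  # stand-in for model predictions

for g in np.unique(group):
    mask = group == g
    accuracy = (y_pred[mask] == y_true[mask]).mean()
    positive_rate = y_pred[mask].mean()
    print(f"{g}: accuracy={accuracy:.2f}, positive prediction rate={positive_rate:.2f}")

# Large gaps between subgroups are a signal to inspect the model's decision logic.
```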
As AI continues to develop, the emphasis on explainability will continue to be a key aspect of regulatory frameworks to ensure that these advanced technologies are used responsibly and effectively in healthcare.
Q. And what role do ethical standards play in explainability?
A. Ethical standards play a fundamental role in the development and deployment of responsible AI systems, especially in sensitive and important sectors such as healthcare. Explainability is inherently linked to these standards, ensuring that AI systems operate transparently, fairly, and responsibly, in line with the core ethical principles of healthcare.
Responsible AI means operating within ethical boundaries. The push for advanced explainability in AI increases trust and reliability, ensuring that AI decisions are transparent, justifiable, and ultimately beneficial to patient care. Ethical standards guide the responsible disclosure of information, protecting user privacy, upholding legal requirements such as HIPAA, and encouraging public trust in AI systems.
Follow Bill’s HIT reporting on LinkedIn: Bill Siwicki
Send him an email: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.