Safe and fair AI needs guardrails, legislation and humans in the loop
Healthcare organizations have sometimes been slow to adopt new artificial intelligence tools and other leading innovations due to legitimate concerns about security and transparency. But to improve care quality and patient outcomes, healthcare needs these innovations.
However, it is imperative that they are applied correctly and ethically. Just because a generative AI application can pass a medical school test does not mean it is ready to become a practicing doctor. Healthcare must leverage the latest advances in AI and large language models to put the power of these technologies in the hands of medical experts so they can deliver better, more accurate, and safer care.
Dr. Tim O’Connell is a practicing radiologist and CEO and co-founder of emtelligent, a developer of AI-powered technology that transforms unstructured data.
We spoke with him to gain a better understanding of the importance of AI guardrails in healthcare as the technology helps modernize medical practice. We also talked about how algorithmic discrimination can perpetuate health inequalities, legislative action to set AI safety standards – and why humans in the loop are essential.
Q. What is the importance of AI guardrails in healthcare as the technology helps modernize medical practice?
A. AI technologies have introduced exciting possibilities for healthcare providers, payers, researchers and patients, offering opportunities for better outcomes and lower healthcare costs. To realize the full potential of AI, especially for medical AI, we must ensure that healthcare professionals understand both the capabilities and limitations of these technologies.
This includes awareness of risks such as non-determinism, hallucinations and problems with reliably referencing source data. Healthcare professionals must be equipped not only with knowledge of the benefits of AI, but also with critical insight into its potential pitfalls, so that they can use these tools safely and effectively in diverse clinical settings.
It is critical to develop and adhere to a set of thoughtful principles for the safe and ethical use of AI. These principles should include addressing concerns around privacy, security and bias, and they should be rooted in transparency, accountability and fairness.
Reducing bias requires training AI systems on more diverse data sets that take into account historical differences in diagnoses and health outcomes, while also shifting training priorities to ensure AI systems are aligned with real healthcare needs.
This focus on diversity, transparency and robust oversight, including the development of guardrails, ensures that AI can be a highly effective tool that remains resilient to errors and helps drive meaningful improvements in healthcare outcomes.
This is where guardrails – in the form of well-designed regulations, ethical guidelines and operational safeguards – become critical. These safeguards help ensure that AI tools are used responsibly and effectively, addressing concerns around patient safety, data privacy and algorithmic bias.
They also provide mechanisms for accountability, allowing any errors or unintended consequences of AI systems to be traced to specific decision points and corrected. In this context, guardrails act as both protective measures and enablers, allowing healthcare professionals to rely on AI systems while protecting themselves from their potential risks.
Q. How can algorithmic discrimination perpetuate health inequalities, and what can be done to solve this problem?
A. If the AI systems we rely on in healthcare are not properly developed and trained, there is a very real risk of algorithmic discrimination. AI models trained on datasets that are not large or diverse enough to represent the full spectrum of patient populations and clinical characteristics can and do produce biased results.
This means the AI may provide less accurate or less effective care recommendations to underserved populations, including racial or ethnic minorities, women, individuals from lower socioeconomic backgrounds, and individuals with very rare or unusual conditions.
For example, if a medical language model is trained primarily on data from a specific demographic group, it may struggle to accurately extract relevant information from clinical notes that reflect different medical conditions or cultural contexts. This could lead to missed diagnoses, misinterpretations of patient symptoms, or ineffective treatment recommendations for populations that the model was not trained to adequately recognize.
In fact, the AI system could perpetuate the very inequities it is intended to alleviate, especially for racial minorities, women, and patients from lower socioeconomic backgrounds, who are often already underserved by traditional healthcare systems.
To address this problem, it is crucial to ensure that AI systems are built on large, highly varied data sets that capture a wide range of patient demographics, clinical presentations and health outcomes. The data used to train these models must be representative of different races, ethnicities, genders, ages, and socioeconomic statuses so the system’s results do not reflect a narrow view of healthcare.
This diversity allows models to perform accurately across diverse populations and clinical scenarios, minimizing the risk of perpetuating bias and ensuring AI is safe and effective for everyone.
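To make that concrete, here is a minimal sketch of one way a team might audit a training cohort for subgroup representation before training begins. It is illustrative only – the CSV file, column names and the 5% floor are assumptions for the sake of the example, not a description of emtelligent's pipeline.

```python
# Minimal sketch: audit a training cohort for demographic representation.
# The file name, column names and MIN_SHARE floor are illustrative assumptions.
import pandas as pd

MIN_SHARE = 0.05  # flag any subgroup making up less than 5% of the cohort

def audit_representation(df: pd.DataFrame, columns: list[str]) -> dict[str, pd.Series]:
    """Report each subgroup's share of the data and print those under the floor."""
    report = {}
    for col in columns:
        shares = df[col].value_counts(normalize=True, dropna=False)
        report[col] = shares
        underrepresented = shares[shares < MIN_SHARE]
        if not underrepresented.empty:
            print(f"[{col}] underrepresented subgroups:")
            print(underrepresented.to_string())
    return report

# Hypothetical usage: "training_cohort.csv" and the column names are placeholders.
cohort = pd.read_csv("training_cohort.csv")
audit_representation(cohort, ["race_ethnicity", "sex", "age_band", "income_band"])
```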
Q. Why are humans essential for AI in healthcare?
A. While AI can process massive amounts of data and generate insights at speeds far beyond human capabilities, it lacks a nuanced understanding of complex medical concepts that are essential to delivering high-quality care. Humans in the loop are essential to AI in healthcare because they provide the clinical expertise, oversight, and context needed to ensure algorithms perform accurately, safely, and ethically.
Consider one use case: extracting structured data from clinical notes, laboratory reports, and other healthcare documents. Without human physicians to guide development, training, and ongoing validation, AI models risk missing important information or misinterpreting medical jargon, abbreviations, or context-specific nuances in clinical language.
For example, a system may incorrectly flag a symptom as significant or miss crucial information included in a doctor’s note. Human experts can help refine these models so they can correctly capture and interpret complex medical language.
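As a rough illustration of that human-in-the-loop pattern, extractions the model is less certain about can be routed to a clinician review queue instead of being written straight to the record. The data structure and the 0.90 confidence cutoff below are assumptions made for this example, not a description of any particular product.

```python
# Minimal sketch: route low-confidence extractions from clinical notes to
# human review. The Extraction fields and the 0.90 cutoff are illustrative only.
from dataclasses import dataclass

@dataclass
class Extraction:
    note_id: str
    label: str         # e.g. "medication", "symptom", "lab_value"
    text: str          # span captured from the clinical note
    confidence: float  # model-reported confidence, 0.0-1.0

REVIEW_THRESHOLD = 0.90  # illustrative cutoff; anything below goes to a clinician

def triage(extractions: list[Extraction]) -> tuple[list[Extraction], list[Extraction]]:
    """Split extractions into auto-accepted results and a clinician review queue."""
    accepted = [e for e in extractions if e.confidence >= REVIEW_THRESHOLD]
    needs_review = [e for e in extractions if e.confidence < REVIEW_THRESHOLD]
    return accepted, needs_review

accepted, needs_review = triage([
    Extraction("note-001", "medication", "metformin 500 mg BID", 0.97),
    Extraction("note-001", "symptom", "c/o SOB on exertion", 0.72),  # abbreviation-heavy span
])
print(f"{len(accepted)} auto-accepted, {len(needs_review)} routed to clinician review")
```

In this pattern the model never acts alone on uncertain findings, which mirrors the oversight role O'Connell describes.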
From a workflow perspective, people in the loop can help interpret and take action on AI-driven insights. Even as AI systems generate accurate predictions, healthcare decisions often require a level of personalization that only physicians can provide.
Human experts can combine AI results with their clinical experience, knowledge of patients’ unique circumstances, and understanding of broader healthcare trends to make informed, compassionate decisions.
Q. What is the status of legislative action to establish AI safety standards in healthcare, and what needs to be done by lawmakers?
A. Legislation to establish AI safety standards in healthcare is still in its early stages, although there is increasing recognition of the need for comprehensive guidelines and regulations to ensure the safe and ethical use of AI technologies in clinical settings.
Several countries have begun to introduce frameworks for AI regulation, many of which build on foundational trustworthy-AI principles – safety, fairness, transparency and accountability – that are beginning to shape these conversations.
In the United States, the Food and Drug Administration has introduced a regulatory framework for AI-based medical devices, specifically software as a medical device (SaMD). The FDA’s proposed framework takes a “total product lifecycle” approach, which aligns with the principles of trustworthy AI by emphasizing continuous monitoring, updates, and real-time evaluation of AI performance.
While this framework focuses on AI-enabled devices, it has not yet fully addressed the challenges posed by non-device AI applications that handle complex clinical data.
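One operational reading of a total product lifecycle approach is continuous tracking of a deployed model's performance against its validation baseline. The sketch below is a simplified illustration – the baseline accuracy, window size and alert threshold are assumptions, not FDA requirements.

```python
# Minimal sketch: watch a deployed model's rolling accuracy for drift.
# Baseline, window and threshold values are illustrative assumptions.
from collections import deque

BASELINE_ACCURACY = 0.93  # assumed performance at the time of validation
MAX_DROP = 0.05           # alert if rolling accuracy falls more than this
WINDOW = 500              # number of recent adjudicated cases to track

recent_outcomes: deque = deque(maxlen=WINDOW)

def record_case(prediction_correct: bool) -> None:
    """Log one adjudicated case and alert if rolling accuracy has drifted."""
    recent_outcomes.append(prediction_correct)
    if len(recent_outcomes) == WINDOW:
        rolling = sum(recent_outcomes) / WINDOW
        if rolling < BASELINE_ACCURACY - MAX_DROP:
            print(f"ALERT: rolling accuracy {rolling:.2f} has drifted below baseline")
```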
Last November, the American Medical Association published proposed guidelines for the use of AI in a manner that is ethical, fair, responsible and transparent.
In its “Principles for Augmented Intelligence Development, Deployment and Use,” the AMA reinforces its position that AI enhances rather than replaces human intelligence, stating that it is “important that the physician community help guide the development of these tools in a way that best meets both the needs of physicians and patients, and helps define their own organization’s risk tolerance, especially where AI impacts direct patient care.”
By fostering this collaboration between policymakers, healthcare professionals, AI developers and ethicists, we can create regulations that promote both patient safety and technological progress. Lawmakers must strike a balance, creating an environment where AI innovation can thrive while ensuring that these technologies meet the highest standards of safety and ethics.
This includes developing regulations that can adapt to new AI developments, ensuring AI systems remain flexible, transparent and responsive to the changing needs of healthcare.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication