2023 was the year of AI. With the rise of major general-purpose language models such as ChatGPT, Claude, and Bard, AI became tangible and accessible to millions of people. There was no shortage of excitement, but there were also growing calls to regulate this powerful technology.
State of AI regulation in healthcare
The EU AI Act was drafted back in 2021, long before general-purpose large language models became widely available. By December 2023, a consensus on the legislation appeared to have been reached. On the other side of the Atlantic, the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directs the Department of Health and Human Services (HHS) to create an AI Task Force that will develop a regulatory action plan for predictive and generative AI technologies in healthcare by 2024.
Regulation always runs one step behind industry, and setting rules for healthcare is not possible without input from the tech industry and the medical community. More and more associations seem to arise every day. There is the US Coalition for Health AI (CHAI™), a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. It has proposed setting up dedicated assurance laboratories to which health systems, developers, and tool providers could submit their AI solutions for evaluation. The New England Journal of Medicine launched NEJM AI, a monthly, online-only publication for evaluating applications of artificial intelligence in clinical medicine. The Global Agency for Responsible AI in Health aims to harmonize global healthcare standards as defined by the WHO. The organization intends to connect regulatory agencies from around the world and create early warning systems that would notify the network if unintended effects of AI are detected anywhere in the world. The ideas are there, but it will take some time for them to take shape.
AI regulation in the EU will be felt after 2026
The official EU legislative document on AI is expected in early 2024. Until then, experts remain cautious and also critical, especially when it comes to open-source AI and the regulation of proprietary models. “Our information about EU AI regulations comes from leaks in the negotiation process, which was characterized by opacity and a lack of democratic oversight. Right now, anti-open-source lobbyists are trying to tamper with the law after the deal, all because they’re angry about the rumor that open science-based AI R&D may be exempt from regulation. I and most other AI experts argue that regulation should only play a role at the application level. If we go further, it would suppress scientific freedom and undermine the European advantage,” says Bart de Witte, founder of the HIPPO AI Foundation, which advocates the development of open-source AI to prevent the consolidation of power in the hands of global technology companies.
The EU AI Act could be disrupted by countries such as France and Germany that prefer self-regulation and want changes in legislation. Ricardo Baptista Leite, CEO of The Global Agency for Responsible AI in Health, sees this as problematic: “In very sensitive areas like health, I don’t believe this is necessarily the best approach. It depends on the application of the AI, but ultimately these countries open up a whole can of worms of discussion that could undermine the further process.”
The upcoming European elections could also shape the conversation, as shifts in political power could influence the implementation of the law. “We have the country-level approach that, with the rise of extreme nationalism, could lead to very, very bad policies that could undermine all attempts to find some level of harmonization. Harmonization is crucial to ensure that these technologies can be used in a safe way, but also to ensure that we get the most out of the technology’s potential,” says Ricardo Baptista Leite, explaining that although the AI Act is a step forward, its real impact on sectors, especially healthcare, will only be felt around 2026. The next two years will be critical in balancing the risks and reaping the benefits that AI brings to healthcare systems and patient care.
How can the industry scale up?
The Global Agency for Responsible AI in Health will advocate for applying to AI in healthcare the same principles used for drug approval and health technology assessment. In the future, the Agency plans to create a comprehensive repository for AI solutions: an online public database focused on health AI that will serve as a global showcase of AI technologies validated by various countries.
More of these topics will be discussed at HIMSS24 Europe, which will feature a dedicated AI track.