NIST Unveils New Open Source Platform for AI Security Assessments
The free-to-download tool, called Dioptra, is designed to help artificial intelligence developers understand some of the unique data risks of AI models and “mitigate those risks while supporting innovation,” the NIST director said.
Nearly a year after the Biden administration issued its executive order on the safe, secure, and trustworthy development of AI, the National Institute of Standards and Technology has made available a new open-source tool to test the safety and security of AI and machine learning models.
WHY IT MATTERS
The new platform, known as Dioptra, fulfills an imperative in the White House EO, which stipulates that NIST take an active role in helping to test algorithms.
“One of the vulnerabilities of an AI system is the model at its core,” NIST researchers explain. “By exposing a model to large amounts of training data, it learns to make decisions. But if adversaries poison the training data with inaccuracies – for example, by introducing data that causes the model to misidentify stop signs as speed limit signs – the model can make incorrect, potentially disastrous decisions.”
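To make the poisoning example concrete, here is a minimal sketch of a label-flipping attack. It is not Dioptra code; the dataset (scikit-learn's digits), the model (logistic regression), and the 30% flip rate are illustrative assumptions, chosen only to show how corrupted training labels degrade a model's accuracy.

```python
# Minimal sketch of a label-flipping poisoning attack (NOT Dioptra code).
# Dataset, model, and poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels and report held-out accuracy."""
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train, labels)
    return model.score(X_test, y_test)

# Clean baseline.
print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# Poison 30% of the training labels by reassigning them at random --
# analogous to mislabeling stop signs as speed limit signs.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = rng.integers(0, 10, size=len(idx))
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```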
The goal is to help organizations in healthcare and other sectors better understand their AI software and assess how well it performs in the face of “different adversarial attacks,” NIST said.
The open source tool – available for free download – can help healthcare providers, other companies, and government agencies evaluate and verify AI developers’ promises about the performance of their models.
“Dioptra does this by allowing the user to determine which types of attacks cause the model to perform less effectively, and by quantifying the performance degradation so that the user can learn how often and under what conditions the system would fail,” the agency explained.
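That measurement idea – sweep an attack's strength and record how performance falls off – can be illustrated with a short sketch. Again, this is not Dioptra's actual API or attack suite; a simple Gaussian noise perturbation stands in for a real adversarial attack, and the dataset and model are the same illustrative assumptions as above.

```python
# Sketch of quantifying performance degradation as attack strength grows
# (NOT Dioptra's API; Gaussian noise stands in for a real attack).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

rng = np.random.default_rng(0)
print("perturbation  accuracy")
for eps in [0.0, 1.0, 2.0, 4.0, 8.0]:
    # Add test-time noise of growing magnitude (digit pixels span 0-16),
    # then record accuracy to see where the model starts to fail.
    noisy = X_test + rng.normal(scale=eps, size=X_test.shape)
    print(f"{eps:>12.1f}  {model.score(noisy, y_test):.3f}")
```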
THE BIGGER TREND
In addition to unveiling the Dioptra platform, NIST’s AI Safety Institute also published new draft guidance this week, Managing Misuse Risk for Dual-Use Foundation Models.
Such models – known as dual-use because they have “potential for both benefit and harm” – can pose a safety risk if used in the wrong way or by the wrong people. The new proposed guidance outlines “seven key approaches to reducing the risks of models being misused, along with recommendations on how to implement them and how to be transparent about their implementation.”
In addition, NIST published three finalized documents on AI safety, focusing on mitigating the risks of generative AI, protecting the data used to train AI systems, and promoting global engagement on AI standards.
In addition to the executive order on AI, there have been recent efforts at the federal level to establish safeguards for AI in healthcare and elsewhere.
This includes a major realignment of agencies within the Department of Health and Human Services, focused on “mission-focused technology, data and AI policies and activities.”
The White House also announced new rules governing the use of AI in federal agencies, including the CDC and VA hospitals.
Meanwhile, NIST is also hard at work on other AI and security initiatives, such as privacy protection guidelines for AI-driven research and a major recent update to its groundbreaking Cybersecurity Framework.
ON THE RECORD
“Despite all of its potentially transformative benefits, generative AI also carries risks that are significantly different from those seen with traditional software,” NIST Director Laurie E. Locascio said in a statement. “This guidance and testing platform will educate software makers about these unique risks and help them develop ways to mitigate them while supporting innovation.”
“AI is the defining technology of our generation, so we are racing to keep pace and help ensure the safe development and deployment of AI,” added U.S. Secretary of Commerce Gina Raimondo. “[These] announcements demonstrate our commitment to giving AI developers, implementers, and users the tools they need to safely harness the potential of AI, while minimizing the risks associated with it. We have made great progress, but much work remains.”
Mike Miliard is Editor-in-Chief of Healthcare IT News.
Email the author: mike.miliard@himssmedia.com
Healthcare IT News is a publication of HIMSS.