The Cybersecurity and Infrastructure Security Agency promises to go 'left of boom' and monitor artificial intelligence software development in a new alert series, offering lessons learned, calling for 'radical transparency' from the software industry and specifying actions manufacturers should take. The goal is to push the industry to evaluate software development lifecycles in relation to customer security outcomes.
CISA's new awareness campaign also follows the publication of voluntary global guidelines for the development of secure AI systems.
WHY IT MATTERS
The first Secure by Design alert, which CISA issued on November 29, highlights vulnerabilities in web management interfaces. It asks software manufacturers to publish a 'secure-by-design' roadmap to protect their customers from malicious cyber activity.
“Software manufacturers must adopt the principles set out in Shifting the Balance of Cybersecurity Risk,” the agency said.
Such a roadmap “shows that they are not simply implementing tactical controls, but are rethinking their role in keeping customers safe.”
Announcing the series on the CISA blog, Eric Goldstein, executive assistant director for cybersecurity, and Bob Lord, senior technical advisor, shed some light on why the agency is doing this.
“By identifying the common patterns in software design and configuration that often lead to customer organizations being compromised, we hope to draw attention to areas that need urgent attention,” they wrote.
In short, CISA said it wants to push the industry to evaluate software development lifecycles based on how they relate to “customer security outcomes.”
For the healthcare industry, the consequences of third-party software vulnerabilities are disastrous for individual healthcare systems, as well as for the industry as a whole. According to one JAMA study, half of the ransomware attacks between 2016 and 2021 disrupted healthcare delivery.
Cybersecurity leaders have long focused on cyber hygiene and on building a security-focused culture in healthcare organizations – a strategy that protects software users once products are deployed and beyond.
But when it comes to AI, CISA and its partner agencies, both domestic and international, want to work further upstream.
“We need to identify the recurring classes of defects that software manufacturers must address by conducting a root cause analysis and then making systemic changes to eliminate these classes of vulnerability,” Goldstein and Lord wrote.
Global cybersecurity agencies all want developers of systems that use AI to make informed cybersecurity decisions at every stage of the development process. To that end, they developed new guidelines – an effort led by CISA and the Department of Homeland Security, along with the United Kingdom's National Cyber Security Centre.
“We are at a turning point in the development of artificial intelligence, which is perhaps the most consequential technology of our time. Cybersecurity is the key to building AI systems that are safe, secure, and trustworthy,” said Secretary of Homeland Security Alejandro N. Mayorkas in a statement on the Guidelines for Secure AI System Development, released last week.
“By integrating secure by design principles, these guidelines represent a historic agreement that requires developers to invest and protect customers at every step of a system's design and development.”
“The publication of the Guidelines for the Development of Secure AI Systems marks an important milestone in our collective commitment – from governments around the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design,” added CISA Director Jen Easterly. “As countries and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global commitment to advancing transparency, accountability and safe practices.”
The guidelines divide the AI systems development lifecycle into four parts: secure design, secure development, secure deployment, and secure operation and maintenance.
“We know that AI is developing at a phenomenal pace and there is a need for coordinated international action, across governments and industry, to keep pace,” said Lindy Cameron, CEO of NCSC.
“These guidelines mark an important step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development, but a core requirement throughout the process.”
THE BIG TREND
In May, the G7 – Canada, France, Germany, Italy, Japan, the United Kingdom and the United States – called for the adoption of international technical standards for AI, and in October the group agreed on an AI Code of Conduct for business.
That month, US President Joe Biden also issued an executive order directing DHS to promote the adoption of AI safety standards worldwide and calling on the US Department of Health and Human Services to develop an AI safety program.
Last week, CISA also released its Roadmap for Artificial Intelligence, which aligns with Biden's national strategy to advance the beneficial use of AI to enhance cybersecurity capabilities, ensure the cybersecurity of AI systems, and defend against malicious use of AI to threaten critical infrastructure, including healthcare.
ON THE RECORD
“We need to uncover the ways in which customers routinely miss opportunities to deploy software products with the right settings to reduce the likelihood of compromise,” Goldstein and Lord wrote in the CISA blog. “Such recurring patterns should lead to improvements in the product that make secure settings the default, not stronger advice to customers in 'hardening guides.'”
Andrea Fox is editor-in-chief of Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a HIMSS Media publication.