President Biden’s executive order on AI directs HHS to establish safety program

The Biden administration on Monday issued a landmark executive order aimed at harnessing the significant promise and managing the many risks of artificial intelligence and machine learning.

WHY IT MATTERS
The broad EO aims to set new standards for AI safety and security, while providing guidance to ensure algorithms and models are fair, transparent and trustworthy.

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the executive order builds on previous actions taken by the President, including work that secured voluntary commitments from 15 leading companies to ensure safe, secure and trustworthy development of AI.

In addition to many provisions meant to make AI innovation safer and more standardized, the order includes specific guidelines for algorithms used in healthcare that are designed to protect patients from harm.

The EO recognizes the potential of “responsible use of AI” to help advance healthcare delivery and drive the development of new and more affordable medicines and therapies.

But recognizing that AI “increases the risk of injuring, misleading, or otherwise harming Americans,” President Biden is also instructing the U.S. Department of Health and Human Services to establish a safety program that will allow the agency to “receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI.”

Among other provisions, the order calls for a new pilot of the National AI Research Resource to catalyze innovation nationwide, combined with the promotion of policies to give small developers and entrepreneurs access to more technical assistance and resources.

It also aims to modernize and streamline visa criteria to increase the opportunities for highly skilled immigrants with expertise in critical areas to study and work in the United States.

The EO also contains numerous provisions to promote AI safety and security standards:

  • A requirement that developers of powerful AI systems share safety test results and other critical information with the federal government. In accordance with the Defense Production Act, it calls on all companies developing machine learning models that pose a potential risk to “national security, national economic security, or national public health and safety” to notify the government when those models are being trained and to share the results of all red-team safety tests.

  • The National Institute of Standards and Technology will set rigorous standards for safety testing of AI systems before their public release, with the Department of Homeland Security applying those standards to critical infrastructure sectors and establishing the AI Safety and Security Board.

  • In addition, agencies that fund life science projects will protect against the risks of using AI to develop hazardous biological materials by establishing strong new standards for biological synthesis screening as a condition of federal funding, creating strong incentives to ensure appropriate screening and manage the risks potentially exacerbated by AI.

On the privacy front, President Biden is calling on Congress to pass bipartisan legislation and is prioritizing federal support to “accelerate the development and use of privacy-preserving techniques – including those that use advanced AI and that ensure AI systems can be trained while maintaining the privacy of the training data.”

The EO also focuses on the consequences of AI for the workforce. It aims to develop “principles and best practices to reduce the harm and maximize the benefits of AI for workers by addressing job displacement, labor standards, equity, workplace health and safety, and data collection,” and calls on federal officials to report on the potential impacts of AI on the labor market, and to study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.

The White House order also aims to prevent algorithmic discrimination, in part through training, technical assistance and coordination between the Department of Justice and federal civil rights agencies on best practices for investigating and prosecuting civil rights violations involving AI.

THE BIG TREND
Since taking office, President Biden has been clear about the need to support healthcare information technology while maintaining safety and security around IT innovation.

The AI Executive Order – which was developed after gathering feedback on AI R&D from a wide range of industry stakeholders – follows the White House’s privacy-focused AI Bill of Rights proposed a year ago.

It also falls in line with the White House’s similarly ambitious National Cybersecurity Strategy from earlier this year (as well as another plan for America’s cyber workforce).

ON THE RECORD
“The actions led by President Biden today are critical steps forward in America’s approach to safe, secure, and trustworthy AI,” the White House said in the executive order. “More action will be needed, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way on responsible innovation.”

Mike Miliard is editor-in-chief of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.
