
The White House outlines new rules for AI use in federal agencies

The Biden administration on Thursday announced new government-wide policies from the White House Office of Management and Budget that govern the use of artificial intelligence across federal agencies, including many focused on health care.

WHY IT MATTERS
The goal of the new policy, which builds on President Biden’s sweeping executive order from October, is to “mitigate the risks of artificial intelligence and realize its benefits,” the White House said in a fact sheet.

The OMB says that by December 1, 2024, federal agencies must implement concrete safeguards whenever they use AI in a way that “could affect the rights or safety of Americans.”

Such safeguards include a wide range of “mandatory actions to reliably assess, test, and monitor the impact of AI on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how government uses AI.”

If an agency cannot demonstrate that these safeguards are in place, it must “discontinue use of the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” according to the White House.

The new rules emphasize AI governance and algorithm transparency – and seek to find a path forward for innovation that takes advantage of the technology’s benefits while protecting against its potential harms.

For example, OMB policy requires all federal agencies to designate Chief AI Officers, who will coordinate the use of AI within their agencies.

They should also establish AI governance councils to coordinate and govern the use of AI within their own specific agencies. (The Departments of Defense, Veterans Affairs, and others have already done this.)

The policy also requires federal agencies to improve public transparency in their use of AI, requiring them to:

  • Release extended annual inventories of their AI use cases, including identifying use cases that impact rights or safety and how the agency addresses relevant risks.

  • Report statistics on the agency’s AI use cases that are not included in the public inventory due to their sensitivity.

  • Inform the public of any AI that is exempt from compliance with any part of OMB policy by a waiver, along with the justifications.

  • Release government-owned AI code, models, and data if such release does not pose a risk to the public or government operations.

The White House says that rather than being prohibitive, the OMB rules are intended to promote safe and responsible innovation and “remove unnecessary barriers to it.”

For example, the new fact sheet mentions AI’s potential to advance public health — noting that the Centers for Disease Control and Prevention uses AI to predict the spread of disease and track the illicit use of opioids, while the Centers for Medicare & Medicaid Services uses the technology to reduce waste and identify discrepancies in drug costs.

The policy also aims to strengthen the AI workforce through initiatives such as a National AI Talent Surge, which aims to hire 100 AI professionals by this summer to promote the safe use of AI across government, along with an additional $5 million to launch a government-wide AI training program involving 7,500 people from 85 federal agencies by 2024.

THE BIG TREND
In October 2023, the White House published President Biden’s landmark executive order on AI, a comprehensive and multi-faceted document that outlines ways to prioritize development of the technology that is “safe, secure and trustworthy.”

Among its many provisions, the EO called on the U.S. Department of Health and Human Services to develop and implement a mechanism to collect reports of “harm or unsafe health care practices” – and take action to address them where possible.

ON THE RECORD
“All leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefits,” Vice President Kamala Harris said on a press call Thursday about the new OMB rules.

“If government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Harris said, giving an example: “If the Veterans Administration wants to use AI in VA hospitals to help doctors diagnose patients, they would first have to demonstrate that AI does not produce racially biased diagnoses.”

The American people, she added, “have a right to know that when and how their government uses AI, it is being used in a responsible way. And we want to do it in a way that holds leaders accountable for the responsible use of AI.”

Mike Miliard is editor-in-chief of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.