AI companies will need to start reporting their safety tests to the US government

WASHINGTON — The Biden administration will begin implementing a new requirement for developers of large artificial intelligence systems to disclose their security test results to the government.

The White House AI Council will meet Monday to review progress made on the executive order President Joe Biden signed three months ago to manage the rapidly evolving technology.

Chief among the order’s 90-day objectives was a mandate under the Defense Production Act that AI companies share vital information with the Commerce Department, including security testing.

Ben Buchanan, the White House special adviser on AI, said in an interview that the administration wants “to know that AI systems are safe before they are released to the public – the president has made it very clear that companies must meet that bar.”

Software companies have committed to a set of categories for security testing, but they are not yet required to meet a common testing standard. The government’s National Institute of Standards and Technology will develop a uniform framework for assessing safety as part of the order Biden signed in October.

AI has emerged as a leading economic and national security consideration for the federal government, given the investments and uncertainties triggered by the launch of new AI tools such as ChatGPT that can generate text, images and sounds. The Biden administration is also weighing legislation from Congress and working with other countries and the European Union on rules for governing the technology.

The Commerce Department has developed a draft rule covering US cloud companies that supply servers to foreign AI developers.

Nine federal agencies, including the Departments of Defense, Transportation, Treasury, and Health and Human Services, have completed risk assessments related to the use of AI in critical national infrastructure such as the electric grid.

The government has also scaled up hiring of AI experts and data scientists across federal agencies.

“We know AI has transformative effects and potential,” said Buchanan. “We’re not trying to upset the apple cart, but we are trying to make sure that regulators are prepared to manage this technology.”