President Joe Biden will sign a sweeping executive order on Oct. 30 to guide the development of artificial intelligence — requiring the industry to develop safety and security standards, introducing new consumer protections and giving federal agencies an expanded task list to oversee the rapidly advancing technology.
The order reflects the government’s effort to shape how AI evolves in a way that maximizes its potential and limits its risks. AI is a source of keen personal interest for Mr. Biden, given its potential to affect the economy and national security.
White House Chief of Staff Jeff Zients recalled that Mr. Biden, after identifying the technology as a top priority, directed his staff to act on the issue with urgency.
“We cannot move at a normal pace of government,” Mr. Zients said, quoting the Democratic president. “We need to move as fast, if not faster, than the technology itself.”
Mr. Biden said the government was late to address the risks of social media, and now young people in the United States are struggling with related mental health issues. AI has the positive potential to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services, among other benefits. But it could also distort basic notions of truth with false images, deepen racial and social inequality, and provide a tool for scammers and criminals.
The order builds on voluntary commitments already made by technology companies. It’s part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools like ChatGPT that can generate new text, images and sounds.
Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology must create standards to ensure AI tools are safe before they are released publicly.
The Department of Commerce will issue guidelines for labeling and watermarking AI-generated content to distinguish between authentic interactions and interactions generated by software. The order also addresses issues of privacy, civil rights, consumer protection, scientific research and employee rights.
An administration official who previewed the order on a Sunday call with reporters said the task lists within the order would be carried out over a span of 90 to 365 days, with safety and security items facing the earliest deadlines. The officials briefed reporters on condition of anonymity, as required by the White House.
On Oct. 26, Mr. Biden gathered his aides in the Oval Office to review and finalize the executive order, a 30-minute meeting that stretched to 70 minutes despite other pressing matters, including the mass shooting in Maine, the war between Israel and Hamas, and the selection of a new speaker of the House of Representatives.
Mr. Biden was deeply curious about the technology during the months of meetings that led to the drafting of the order. His scientific advisory board focused on AI at two of its meetings, and his Cabinet discussed it at two of its own. The president has also pressed tech executives and civil society advocates about the technology’s potential at multiple gatherings.
“He was as impressed and alarmed as anyone,” White House Deputy Chief of Staff Bruce Reed said in an interview. “He saw fake AI images of himself, of his dog. He saw how it can make for bad poetry. And he has seen and heard the incredible and terrifying technology of voice cloning, which can take three seconds of your voice and turn it into a completely fake conversation.”
The possibility of false images and sounds led the president to prioritize labeling and watermarking anything produced by AI. Mr. Biden also wanted to counter the risk that older Americans would receive a call from someone who sounded like a loved one, only to be scammed by an AI tool.
Meetings could run longer than planned, with Mr. Biden telling civil society advocates in a ballroom at San Francisco’s Fairmont Hotel in June: “This is important. Take as long as you need.”
The president also spoke with scientists and saw the benefits AI could deliver if used for good. He listened to a Nobel Prize-winning physicist talk about how AI could explain the origins of the universe. Another scientist showed how AI could model extreme weather, such as 100-year floods, because the historical data used to assess such events has lost accuracy as the climate changes.
The issue of AI was seemingly inescapable for Mr. Biden. One weekend he relaxed at Camp David by watching the Tom Cruise movie “Mission: Impossible – Dead Reckoning Part One.” The film’s villain is a sentient, rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes.
“If he hadn’t already been worried about what could go wrong with AI before that movie, he saw a lot more to worry about,” said Mr. Reed, who watched the movie with the president.
With Congress still in the early stages of debating AI safeguards, Mr. Biden’s order stakes out an American perspective as countries around the world rush to establish their own guidelines. After more than two years of deliberation, the European Union is finalizing a comprehensive set of regulations targeting the technology’s riskiest applications. China, a major AI rival of the US, has also set some rules.
British Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.
The US, particularly the West Coast, is home to many of the leading developers of advanced AI technology, including tech giants Google, Meta and Microsoft and AI-focused startups such as OpenAI, maker of ChatGPT. The White House benefited from that industry heft earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.
But the White House also faced significant pressure from Democratic allies, including labor and civil rights groups, to ensure that its policies reflected their concerns about the real harms of AI.
The American Civil Liberties Union is among the groups that have met with the White House to try to ensure that “we hold the tech industry and the tech billionaires accountable” so that algorithmic tools “work for all of us, not just for a few,” said ReNika Moore, director of the ACLU’s Racial Justice Program.
Suresh Venkatasubramanian, a former Biden administration official who helped establish principles for addressing AI, said one of the biggest challenges within the federal government is what to do about law enforcement’s use of AI tools, including at the nation’s borders.
“These are all places where we know the use of automation is very problematic, with facial recognition and drone technology,” Mr. Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups and has been linked to wrongful arrests.
This story was reported by The Associated Press.