AI is an ‘extinction-level threat’ and the US government should be given new ‘emergency powers’ to control the technology, State Department report warns

A new study funded by the US Department of State calls for a temporary ban on the creation of advanced AI that exceeds a certain threshold of computing power.

The technology, the authors claim, poses an “extinction-level threat to the human species.”

The study, commissioned under a $250,000 federal contract, also calls for “defining emergency powers” for the executive branch of the US government “to respond to dangerous and rapidly evolving AI-related incidents” – such as ‘swarm robotics’.

Treating high-value computer chips as international contraband and even monitoring hardware usage are just some of the drastic measures the new study calls for.

The report joins a chorus of voices from industry, government and academia calling for aggressive regulatory attention to the hotly pursued and game-changing, yet socially disruptive, potential of artificial intelligence.

Last July, for example, the United Nations Educational, Scientific and Cultural Organization (UNESCO) linked its concerns about AI to equally futuristic concerns about brain-chip technology, a la Elon Musk’s Neuralink, warning of “neurosurveillance” violating “mental privacy.”

A new U.S. State Department-funded study from Gladstone AI, commissioned under a $250,000 federal contract, calls for defining emergency powers for the executive branch of the U.S. government “to respond to dangerous and rapidly evolving AI-related incidents”

Gladstone AI’s report puts forward a dystopian scenario in which the machines themselves could decide that humanity is an enemy to be exterminated, a la the Terminator films: ‘if developed using current techniques,’ they write, AI ‘may behave hostile toward humans by default’

Although the new report notes on its first page that the recommendations “do not reflect the views of the U.S. Department of State or the U.S. government,” the authors have been educating the government about AI since 2021.

The study’s authors, a four-person AI consultancy called Gladstone AI, run by brothers Jérémie and Edouard Harris, told TIME that their previous presentations on AI risks were often heard by government officials who were not authorized to act.

That has changed at the U.S. State Department, they told the magazine, because the Office of International Security and Nonproliferation is specifically charged with combating the spread of cataclysmic new weapons.

And the Gladstone AI report pays considerable attention to ‘weaponization risk’.

In recent years, Gladstone AI CEO Jérémie Harris has also presented to the Canadian House of Commons Standing Committee on Industry and Technology

An offensive, advanced AI, they write, “could potentially be used to design and even execute catastrophic biological, chemical, or cyber attacks, or to enable unprecedented weaponized applications in swarm robotics.”

But the report puts forward a second, even more dystopian scenario, which they describe as a ‘loss of control’ AI risk.

There is, they write, “reason to believe that they (weaponized AI) may be uncontrollable if developed using current techniques, and may behave hostile toward humans by default.”

In other words, the machines can decide for themselves that humanity (or part of humanity) is simply an enemy that must be eradicated for good.

Gladstone AI CEO Jérémie Harris also presented similarly dire scenarios at hearings held last year, on December 5, 2023, before the Standing Committee on Industry and Technology in Canada’s House of Commons.

“It is no exaggeration to say that the water cooler talk in the frontier AI safety community frames near-future AI as a weapon of mass destruction,” Harris told Canadian lawmakers.

“Publicly and privately, frontier AI labs are telling us that we can expect AI systems in the coming years to be able to launch catastrophic malware attacks and support bioweapons design, among many other alarming capabilities,” he said, according to IT World Canada‘s reporting on his comments.

“Our own research,” he said, “suggests this is a reasonable assessment.”

Harris and his co-authors noted in their new State Department report that the private sector’s heavily venture-funded AI companies face an incredible “incentive to scale” in order to beat their competition, one that far outweighs any countervailing ‘incentive to invest in safety or security’.

The only viable point of control in their scenario, they advise, lies outside cyberspace: strict regulation of the advanced computer chips used in the real world to train AI systems.

The Gladstone AI report calls non-proliferation work on this hardware the “key requirement to protect long-term global safety and security from AI.”

And it wasn’t a suggestion they made lightly, given the likelihood of industry protests: “It’s an extremely challenging recommendation to make, and we’ve spent a lot of time looking for ways to propose these kinds of measures,” they said.

One of the Harris brothers’ co-authors on the new report, former Defense Department official Mark Beall, served as head of the Strategy and Policy Division of the Pentagon’s Joint Artificial Intelligence Center during his years of government service.

Beall appears to be taking urgent action on the threats identified in the new report: the DoD’s former AI strategy chief has since left Gladstone to launch a super PAC focused on the risks of AI.

The PAC, called Americans for AI Safety, launched this Monday with the stated hope of “passing AI safety legislation by the end of 2024.”
