Fears the Pentagon was ‘building killer robots in the basement’ led to stricter AI rules, Defense Department claims

Fears that the Pentagon is “building killer robots in the basement” may have led to stricter AI regulations requiring all systems to be approved before deployment.

The Department of Defense (DoD) recently updated its AI rules due to “a lot of confusion about” how it plans to use machines that can make their own decisions on the battlefield, according to the deputy assistant secretary of defense.

Michael Horowitz explained at an event this month that the “guideline does not prohibit the development of any system” but will “make it clear what is and is not allowed” and enforce an “obligation to behave responsibly” as the military develops deadly autonomous systems.

While the Pentagon believes the changes should ease the public’s concerns, some have said they are not “convinced” by the efforts.

Fears that the Pentagon is “building killer robots in the basement” may have led to stricter AI regulations requiring all systems to be approved before deployment. Pictured is the unmanned MAARS vehicle unveiled in 2015

News of the Pentagon’s update to its 2012 “Autonomy in Weapon Systems” directive has sparked a debate online, with many people saying: “If the Pentagon says they won’t do it, they will.”

DailyMail.com has contacted the Department of Defense for comment.

The Defense Department has aggressively pushed to modernize its arsenal with autonomous drones, tanks and other weapons that select and attack a target without human intervention.

Mark Brakel, director of the advocacy group Future of Life Institute (FLI), told DailyMail.com: ‘These weapons pose a huge risk of unintended escalation.’

He explained that AI-powered weapons could misinterpret something as innocuous as a ray of sunlight as a threat, leading them to attack foreign powers for no reason.

Brakel said the outcome could be devastating because “without meaningful human control, AI-powered weapons are akin to the Norwegian rocket incident (a near-nuclear armageddon) on steroids, and they could increase the risk of accidents in hotspots like the Taiwan Strait.”


The Department of Defense (DoD) recently updated its AI rules due to “a lot of confusion about” how it plans to use machines that can make their own decisions on the battlefield

The Defense Department has encouraged global action to monitor AI weapons by calling on other countries to endorse the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy; in November, 47 countries endorsed the initiative.

Horowitz said during a Jan. 9 panel that the Department of Defense is intensely focused on building public trust in the technology and that the department is committed to complying with international humanitarian law.

“The real test of any new directive should be how many weapons it excludes or significantly changes,” Brakel said.

“We have yet to see any evidence that changes to the directive have a meaningful impact on the way weapons are developed.”

The US is not the only nation developing autonomous military systems: China and Russia have their own AI-powered arsenals, which are currently being used in two wars.

The Pentagon announced in November that it would launch thousands of AI-based autonomous vehicles by 2026 to keep up with US adversaries.

Humanitarian groups worry that the AI-powered weapons could fall into the wrong hands or inadvertently spark a war by mistaking something like sunlight for an attack


The ambitious initiative – called Replicator – aims to “galvanize progress in the too-slow shift of U.S. military innovation” toward platforms that are small, smart, cheap and numerous, Deputy Secretary of Defense Kathleen Hicks said in August.

“Replicator itself is about a process,” Horowitz said this month. “It’s about figuring out how… we can move quickly and scale up the key capabilities that we think are important given the national defense strategy.”

Horowitz reiterated that the DoD is committed to building public trust in the technology and to complying with international humanitarian law.

However, other members of FLI are not convinced by the stricter rules or by how the Pentagon will enforce them.

Anna Hehir, who leads autonomous weapons systems research for FLI, told The Hill: “It’s really a Pandora’s box that we’re starting to see open, and it’s going to be very difficult to go back.

“I would advocate that the Pentagon view the use of AI in a military context as comparable to the dawn of the nuclear age,” she added.

“So this is a new technology that we don’t understand. And if we view this as an arms race, as the Pentagon does, we could be heading for a global catastrophe.”