Pentagon launches tech to stop AI-powered killing machines from going rogue on the battlefield when fooled by robot-tricking visual ‘noise’

Pentagon officials have raised alarms about “unique classes of vulnerabilities for AI or autonomous systems,” which they hope new research can address.

The program, called Guaranteeing AI Robustness against Deception (GARD), has been tasked since 2022 with identifying how the visual data or other electronic signal inputs fed to AI can be gamed through the calculated introduction of noise.

Computer scientists at one of GARD’s defense contractors have been experimenting with kaleidoscopic patches designed to trick AI-based systems into making false identifications.

“By adding noise to an image or a sensor, you may be able to break a downstream machine learning algorithm,” a senior Pentagon official leading the study explained Wednesday.
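
For readers curious about the mechanics behind that quote, the sketch below shows the textbook ‘fast gradient sign method’ from the public machine learning literature, in which pixel-level noise is computed from a model’s own gradients to flip its prediction. It is a minimal illustration, not GARD’s actual tooling; the pretrained classifier and noise budget are assumptions chosen for demonstration.

```python
# Minimal sketch of the fast gradient sign method (FGSM) -- illustrative
# only, not GARD's tooling. A tiny, nearly invisible perturbation is
# derived from the model's own gradients to push it toward a wrong label.
import torch
import torchvision.models as models

# Assumed stand-in classifier; any differentiable image model would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_noise(image, true_label, epsilon=0.03):
    """Return a copy of `image` (shape 1x3xHxW, values in [0, 1]) with
    adversarial noise added. `epsilon` caps the per-pixel change."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```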

The news comes as fears that the Pentagon is ‘building killer robots in the basement’ have reportedly led to stricter AI regulations for the US military – requiring all systems to be approved before being deployed.

“You can sometimes create physically feasible attacks with knowledge of that algorithm,” added the official, Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA).

Technically, it is feasible to “trick” an AI’s algorithm into mission-critical errors, making the AI misidentify patterned patches or stickers as a real physical object that isn’t actually there.

For example, a bus full of civilians could be wrongly identified as a tank by an AI if it were tagged with the right “visual noise,” a national security reporter at the site ClearanceJobs suggested.

In short, such cheap and lightweight “noise” tactics could cause vital military AI to misclassify enemy combatants as allies, and vice versa, during a crucial mission.
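
For a sense of how such a sticker could be made, the hypothetical sketch below follows the ‘adversarial patch’ recipe from the open research literature, not any Pentagon program: a small block of pixels is optimized so that images containing it are pushed toward an attacker-chosen label. The model, loop, and parameters are illustrative assumptions; published attacks also randomize the patch’s location, scale, and rotation so it survives printing and camera capture.

```python
# Hypothetical sketch of adversarial patch training -- an illustration of
# the published technique, not any military system. The patch's pixels are
# the only thing being optimized; the model itself is left untouched.
import torch

def train_patch(model, images, target_class, size=50, steps=200, lr=0.05):
    """Optimize a `size` x `size` sticker that pushes classifications of
    `images` (Nx3xHxW, values in [0, 1]) toward `target_class`."""
    patch = torch.rand(1, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    labels = torch.full((images.size(0),), target_class, dtype=torch.long)
    for _ in range(steps):
        batch = images.clone()
        # Paste the patch into a corner of every image (real attacks
        # randomize its placement, scale, and rotation each step).
        batch[:, :, :size, :size] = patch.clamp(0, 1)
        loss = torch.nn.functional.cross_entropy(model(batch), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```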

Researchers with the modestly budgeted GARD program have spent $51,000 on research into visual and signal noise tactics since 2022, Pentagon audits show.

A 2020 MITRE study illustrated how visual noise that may appear merely decorative or insignificant to the human eye, such as a ‘Magic Eye’ poster from the 1990s, can be interpreted as a solid object by AI. Above, MITRE’s visual noise causes an AI to see apples

US Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities Michael Horowitz explained at an event in January that the new Pentagon guidance ‘does not prohibit the development of any (AI) system’ but will ‘make it clear what is and is not allowed.’ Above, a fictional killer robot from the Terminator film franchise

What is public of their work includes studies from 2019 and 2020 illustrating how visual noise that may appear merely decorative or insignificant to the human eye, such as a ‘Magic Eye’ poster from the 1990s, can be interpreted by AI as a solid object.

Computer scientists with defense contractor the MITRE Corporation managed to create visual noise that an AI mistook for apples on a supermarket shelf, a bag left outside, and even people.

“Whether it’s physically achievable attacks or noise patterns added to AI systems,” Turek said Wednesday, “the GARD program has built state-of-the-art defenses against them.”

“Some of these tools and capabilities have been made available to CDAO (the Department of Defense’s Chief Digital and AI Office),” Turek said.

The Pentagon established the CDAO in 2022; it serves as a hub to facilitate faster adoption of AI and related machine learning technologies within the military.

The Department of Defense (DoD) recently updated its AI rules because there was “a lot of confusion about” how it plans to use machines that make decisions on their own on the battlefield, according to US Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities Michael Horowitz.

Horowitz explained at an event in January that the “guideline does not prohibit the development of any (AI) system” but will “make it clear what is and is not allowed” and maintain a “commitment to responsible behavior” as the military develops lethal autonomous systems.

While the Pentagon believes the changes should ease the public’s minds, some have said they are not “convinced” by the efforts.

Mark Brakel, director of the advocacy group Future of Life Institute (FLI), told DailyMail.com in January: ‘These weapons pose a huge risk of unintended escalation.’

He explained that AI-powered weapons could misinterpret something as innocuous as a ray of sunlight as a threat and attack a foreign power unprovoked, even without any deliberately hostile “visual noise.”

Brakel said the outcome could be devastating because “without meaningful human control, AI-powered weapons are akin to the Norwegian missile incident (a near-nuclear armageddon) on steroids, and they could increase the risk of accidents in hotspots like the Taiwan Strait.”

DailyMail.com has contacted the Department of Defense for comment.

The Defense Department has aggressively pushed to modernize its arsenal with autonomous drones, tanks and other weapons that select and attack a target without human intervention.