Chinese and Russian AI with access to NUKES could start WW3 and spark Armageddon: US fears computer miscalculation could see missiles launched at America

Russia and China must ensure that only humans, and never artificial intelligence, gain control of nuclear weapons to avoid a potential doomsday scenario, a senior US official has said.

Washington, London and Paris have all agreed to maintain total human control over nuclear weapons, said State Department arms control official Paul Dean, as a failsafe to prevent any technological problems from dragging humanity into a devastating conflict.

Dean, deputy assistant secretary at the Bureau of Arms Control, Deterrence and Stability, yesterday urged Moscow and Beijing to follow suit.

“We think it is an extremely important standard of responsible behavior and we think it is something that would be very welcome in a P5 context,” he said, referring to the five permanent members of the United Nations Security Council.

It comes as regulators warned that AI is facing its ‘Oppenheimer moment’ and called on governments to develop legislation limiting its application to military technology before it is too late.

The alarming statement, referring to J. Robert Oppenheimer, who helped invent the atomic bomb in 1945 before advocating for controls on the spread of nuclear weapons, was made Monday at a conference in Vienna, where civilian, military and technology officials from more than 100 countries met to discuss the prospect of militarized AI systems.

The Hwasong-18 intercontinental ballistic missile is launched from a secret location in North Korea

Washington, London and Paris have all agreed to maintain total human control over nuclear weapons, said US State Department official Paul Dean, urging Russia and China to follow suit (launch of a Sarmat intercontinental ballistic missile in the photo).

A Minuteman III intercontinental ballistic missile is pictured in a silo at an undisclosed location in the US.

Although the integration of AI into military hardware is rapidly increasing, the technology is still in its infancy.

But so far, there is no international treaty banning or restricting the development of lethal autonomous weapon systems (LAWS).

“This is the Oppenheimer moment of our generation,” said Austrian Foreign Minister Alexander Schallenberg. ‘Now is the time to agree on international rules and standards.’

During his opening speech at the Conference on Autonomous Weapons Systems in Vienna, Schallenberg described AI as the most important advance in warfare since the invention of gunpowder more than a millennium ago.

The only difference, he continued, is that AI is even more dangerous.

“In any case, let’s ensure that the most profound and far-reaching decision – who lives and who dies – remains in the hands of people and not machines,” Schallenberg said.

The Austrian minister argued that the world needs to act now to ‘ensure human control’, pointing to the disturbing trend of military AI software replacing humans in the decision-making process.

“The world is approaching a tipping point in responding to concerns about autonomous weapons systems, and support for negotiations is reaching unprecedented levels,” said Steve Goose, arms campaigns director at Human Rights Watch.

“The adoption of a strong international treaty on autonomous weapons systems could not be more necessary or urgent.”

There are already examples of AI being used in a military context with deadly consequences.

Earlier this year, a report published by +972 Magazine cited six Israeli intelligence officers who admitted to using an AI system called “Lavender” to classify as many as 37,000 Palestinians as suspected militants – marking these people and their homes as acceptable targets for airstrikes.

Lavender was trained using data from Israeli intelligence’s decades-long surveillance of the Palestinian population, using the digital footprints of known militants as a model for what signal to look for in the noise, the report said.

Meanwhile, Ukraine is developing AI-based drones that can track Russian targets from further away and be more resilient to electronic countermeasures as it attempts to boost its military capabilities as the war rages on.

Deputy Defense Minister Kateryna Chernohorenko said Kiev is developing a new system that can autonomously distinguish, hunt and attack its targets from a distance.

This would make the drones harder to shoot down or jam, she said, and would reduce the threat of retaliatory attacks on drone pilots.

To date, there is no international treaty banning or restricting the development of lethal autonomous weapon systems (LAWS).

Civilian, military and technology leaders from more than 100 countries met in Vienna on Monday to discuss regulatory and legislative approaches to autonomous weapons systems and military AI

A pilot practices with a drone at a training ground in the Kiev region on February 29, 2024, amid the Russian invasion of Ukraine

‘Our drones must become more effective and be guided to the target without operators.

‘It has to be based on visual navigation. We also call it “last-mile targeting”, tailoring to the image,’ she told The Telegraph.

Monday’s conference on LAWS in Vienna came as the Biden administration seeks to deepen separate discussions with China on both nuclear weapons policy and the growth of artificial intelligence.

The spread of AI technology was among the topics raised during high-level talks between US Secretary of State Antony Blinken and Chinese Foreign Minister Wang Yi in Beijing on April 26.

The two sides agreed to hold their first bilateral talks on artificial intelligence in the coming weeks, Blinken said, adding that they would share their views on how best to manage the risks and security surrounding the technology.

As part of normalizing military communications, U.S. and Chinese officials resumed discussions on nuclear weapons in January, but formal arms control negotiations are not expected anytime soon.

China, which is expanding its nuclear weapons capabilities, insisted in February that the major nuclear powers should first negotiate a no-first-use treaty among themselves.