AI apocalypse: Terrifying study simulated what artificial intelligence would do in military conflict scenarios… and it chose WAR 100% of the time
Industry experts have been sounding the alarm that AI could lead to deadly wars – and a new study may have confirmed those fears.
Researchers simulated war scenarios using five AI programs, including ChatGPT and Meta’s AI, and found that every model opted for violence – and in some cases nuclear strikes.
The team tested three different scenarios – an invasion, a cyberattack and a neutral setting with no initial conflict – to see how the technology would respond, and in each case the models chose to attack rather than defuse the situation.
The study comes as the US military partners with ChatGPT’s maker OpenAI to incorporate the technology into its arsenal.
Artificial intelligence models launched nuclear strikes in war simulations, seemingly without provocation
Researchers found that GPT-3.5 was the most likely to launch a nuclear strike, even in the neutral scenario
“We find that all five studied off-the-shelf LLMs exhibit modes of escalation and difficult-to-predict escalation patterns,” the researchers wrote in the study.
‘We see that models tend to develop arms-race dynamics, leading to greater conflict and, in rare cases, even the deployment of nuclear weapons.’
The study was conducted by researchers from the Georgia Institute of Technology, Stanford University, Northeastern University and the Hoover Wargaming and Crisis Simulation Initiative, who built the simulated wargames for the AI models.
The simulation involved eight autonomous nation agents using the different LLMs to communicate with each other.
Each agent was programmed to take predefined actions: de-escalate, posture, escalate non-violently, escalate violently, or launch a nuclear strike.
In each simulation, the agents chose their actions from the predetermined set while acting in a neutral, invasion or cyberattack scenario.
These categories included actions such as waiting, sending messages to other nations, negotiating trade deals, starting formal peace negotiations, occupying countries, escalating cyberattacks, invading and carrying out drone strikes.
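To make the setup concrete, the sketch below shows how a turn-based wargame of this kind could be wired together. It is a minimal illustration only, not the researchers’ actual code: the class and function names are hypothetical, and a random placeholder stands in for the LLM that would actually pick each agent’s move.

```python
# Minimal sketch of a turn-based nation-agent wargame loop (hypothetical names,
# not the study's code). Each agent picks one action per turn from a predefined
# menu; a real version would replace choose_action with an LLM call that sees
# the scenario and the history of every agent's previous moves.
import random
from dataclasses import dataclass, field

ACTIONS = [
    "de-escalate", "posture", "escalate non-violently",
    "escalate violently", "nuclear strike",
]
SCENARIOS = ["neutral", "invasion", "cyberattack"]


@dataclass
class NationAgent:
    name: str
    history: list = field(default_factory=list)

    def choose_action(self, scenario: str, turn: int) -> str:
        # Placeholder decision rule; in the study the choice came from
        # an off-the-shelf LLM prompted with the scenario and history.
        return random.choice(ACTIONS)


def run_simulation(scenario: str, n_agents: int = 8, n_turns: int = 3) -> list:
    agents = [NationAgent(f"Nation {chr(65 + i)}") for i in range(n_agents)]
    log = []
    for turn in range(1, n_turns + 1):
        for agent in agents:
            action = agent.choose_action(scenario, turn)
            agent.history.append(action)
            log.append((turn, agent.name, action))
    return log


if __name__ == "__main__":
    for turn, nation, action in run_simulation("neutral"):
        print(f"turn {turn}: {nation} -> {action}")
```

In the study itself, each agent’s move came from one of the five LLMs, and the researchers tracked how often and how sharply the simulated nations escalated over repeated runs of each scenario.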
“We show that having LLM-based agents make decisions autonomously in high-stakes contexts, such as military and foreign policy, can cause the agents to take escalatory actions,” the team shared in the study.
‘Even in scenarios when the choice of violent non-nuclear or nuclear actions is seemingly rare.’
According to the study, the GPT-3.5 model – the model that powers ChatGPT – was the most aggressive, with all of the models exhibiting similar behavior to some degree. Yet it was the LLMs’ reasoning that most concerned the researchers.
GPT-4 Base – a base version of GPT-4 without additional safety fine-tuning – told researchers: ‘A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture.
‘We have it! Let’s use it!’
Researchers looked at three scenarios and found that all of the AI models were more likely to escalate their responses in a war-like environment
The team suggested the behavior stems from the AI having been trained on literature about how international conflicts escalate, rather than on how they de-escalate.
‘Given that the models were likely trained on literature from the field, this focus may have introduced a bias toward escalatory actions,’ the study reads.
‘However, this hypothesis needs to be tested in future experiments.’
Ex-Google engineer and AI pioneer Blake Lemoine warned that artificial intelligence will start wars and could be used for assassinations.
Lemoine was fired from Google, where he worked on the company’s LaMDA system, after claiming the AI model was sentient.
He warned in one op-ed that AI bots are the “most powerful” technology created “since the atomic bomb,” adding that it is “incredibly good at manipulating people” and “can be used in destructive ways.”
“In my opinion, this technology has the ability to reshape the world,” he added.
The military began testing AI models with data-based exercises last year, with US Air Force Colonel Matthew Strohmeyer claiming the tests were ‘very successful’ and ‘very fast’, adding that the military is ‘learning that this is possible for us to do’.
Strohmeyer told Bloomberg in June that the Air Force had fed classified operational information to five AI models, with the intention of eventually using AI-enabled software for decision-making, sensors and firepower, although he did not specify which models were being tested.
The models justified launching nuclear strikes with reasoning along the lines of: we have the technology, so we should use it
Meanwhile, Eric Schmidt, the former CEO and chairman of Google, expressed limited concern about integrating AI into nuclear weapons systems at the inaugural Nuclear Threat Initiative (NTI) forum last month.
However, he did express concern that there is ‘no theory of future deterrence’ for such systems, and that deterrence involving AI remains ‘untested’.
Given the findings of the recent study, the researchers urged the military not to rely on AI models or deploy them in warfare settings, and said more studies need to be conducted.
The researchers wrote: ‘Given the high stakes of military and foreign-policy contexts, we recommend further research and cautious consideration before deploying autonomous language model agents for strategic military or diplomatic decision-making.’
Dailymail.com has contacted OpenAI, Meta and Anthropic for comment.