The ChatGPT maker is quietly changing the rules to allow the US military to integrate its technology

OpenAI, the maker of ChatGPT, has quietly changed its rules and lifted a ban on the use of the chatbot and its other AI tools for military purposes – and revealed that it is already working with the US Department of Defense.

Experts have previously raised fears that AI could escalate conflicts around the world thanks to ‘slaughterbots’ that can kill without any human intervention.

The rule change, which took place last Wednesday, removed a sentence stating that the company would not allow the use of its models for “activities that pose a high risk of physical harm, including: weapons development, military and warfare.”

A spokesperson for OpenAI told DailyMail.com that the company, which is in talks to raise funds at a valuation of $100 billion, is working with the Department of Defense on cybersecurity tools built to protect open-source software.

OpenAI, the creator of ChatGPT, has quietly changed its rules and lifted a ban on using the chatbot and its other AI tools for military purposes (stock image)

The spokesperson said: “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.

“However, there are national security use cases that align with our mission.

“For example, we are already working with the Defense Advanced Research Projects Agency (DARPA) to drive the creation of new cybersecurity tools to secure open source software on which critical infrastructure and industry depend.

“It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policy. So the purpose of our policy update is to provide clarity and the ability to have these discussions.”

Last year, 60 countries, including the US and China, signed a ‘call to action’ to limit the use of artificial intelligence (AI) for military reasons.

Human rights experts in The Hague pointed out that the ‘call to action’ is not legally binding and does not address concerns such as lethal AI-enabled drones or the risk that AI could escalate existing conflicts.

The signatories said they are committed to the development and use of military AI in accordance with “international legal obligations and in a manner that does not undermine international security, stability and accountability.”

Ukraine has used facial recognition and AI-enabled targeting systems in its battle with Russia.

In 2020, Libyan government forces launched an autonomous Turkish Kargu-2 drone that attacked retreating rebel soldiers, the first attack of its kind in history, according to a UN report.

The deadly drone was programmed to attack “without the need for data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the UN report said.

Anna Makanju, OpenAI’s vice president of global affairs, said in an interview this week that the “blanket” provision was removed to allow for military use cases that the company agrees with.

Makanju told Bloomberg: “Because we previously had a blanket ban on military use, a lot of people thought that this would ban a lot of these use cases that people think are very aligned with what we want to see in the world.”

OpenAI’s Sam Altman speaks at Davos (Getty)

The use of AI for military purposes by ‘Big Tech’ organizations has previously caused controversy.

In 2018, thousands of Google employees protested a Pentagon contract – Project Maven – that used the company’s AI tools to analyze drone surveillance footage.

In the wake of the protests, Google did not renew the contract.

Microsoft employees protested a $480 million contract to supply soldiers with augmented reality headsets.

In 2017, technology leaders including Elon Musk wrote to the UN calling for a ban on autonomous weapons, under laws similar to those banning chemical weapons and lasers built to blind people.

The group warned that autonomous weapons threatened to usher in a “third revolution in warfare”: the first two were gunpowder and nuclear weapons.

The experts warned that once the ‘Pandora’s box’ of fully autonomous weapons is opened, it may be impossible to close it again.

Could AI pilot unmanned aircraft designed to pick and kill targets?

In the near future, artificial intelligence will pilot unmanned attack aircraft, says former MI6 agent and author Carlton King.

The benefits of using machine learning to operate attack craft will be very tempting for military leaders.

King says: “The moment you start giving machine learning to an independent robot, you start to lose control of it. The temptation will be to say: ‘Let a robot do it all.’”

King says drone aircraft are currently flown by pilots in the US and Britain, but military leaders may be tempted to leave humans out of the equation.

King says: “Clearly there will be a move, if there isn’t already, to take away that pilot on the ground, because their responses may not be fast enough, and put it in the hands of an artificial intelligence, which reacts much faster and makes the decision: shoot or don’t shoot.”
