Rise of the killer robots: Experts reveal just how close we are to a Terminator-style takeover

It’s been exactly 40 years since The Terminator hit the big screen and shocked moviegoers with its terrifying depiction of a post-apocalyptic future.

In James Cameron’s epic science fiction blockbuster, billions of people die when sentient machines unleash global nuclear war at the turn of the 21st century.

Arnold Schwarzenegger stars as the eponymous robot assassin sent back in time from 2029 to 1984 to eliminate the threat of human resistance.

The Terminator, which looks exactly like an adult human, “absolutely won’t stop… until you’re dead,” as one character puts it.

While this may sound like pure science fiction, academics and industry figures – including Elon Musk – fear that humanity will indeed be destroyed by AI.

But when exactly will this happen? And will humanity’s demise mirror the apocalypse depicted in the Hollywood film?

MailOnline spoke to experts to find out just how close we are to a Terminator-style takeover.

James Cameron’s epic science fiction blockbuster – which hit US cinemas on Friday, October 26, 1984 – stars Arnold Schwarzenegger as the robot killer of the same name.

In the classic film, the Terminator’s goal is simple: kill Sarah Connor, an LA resident who will give birth to John, who will lead a rebellion against the machines.

The Terminator is equipped with weapons and an impenetrable metal exoskeleton, plus advanced vision and superhuman limbs that can easily crush or strangle us.

Natalie Cramp, partner at data firm JMAN Group, said a real-life equivalent of the Terminator is possible, but fortunately it probably won’t happen in our lifetimes.

“Anything is possible in the future, but we are still a long way from robotics reaching the level where Terminator-like machines have the ability to overthrow humanity,” she told MailOnline.

According to the expert, humanoid-style robots such as the Terminator are not the most likely path for robotics and AI at the moment.

Rather, the most pressing threat comes from machines already in common use, such as drones and autonomous cars.

‘There are so many hurdles to making such a robot work effectively – not least how to control it and coordinate movements,’ Cramp told MailOnline.

‘The biggest problem is that it’s actually not the most efficient form for a robot to take to be useful.

The Terminator is equipped with weapons and an impenetrable metal exoskeleton, as well as huge superhuman limbs that can easily crush or strangle us


‘If we speculate about what kinds of AI devices could go rogue and harm us, it’s likely to be everyday objects and infrastructure: a self-driving car that malfunctions or an electrical grid that goes down.’

Mark Lee, professor of artificial intelligence at the University of Birmingham, said a Terminator-style apocalypse would occur if “any government is crazy enough to hand over control of national defense to an AI.”

“Luckily I don’t think there is any country crazy enough to consider this,” he told MailOnline.

Professor Lee agreed that other forms of AI, and the powerful algorithms behind them, pose a more pressing problem.

‘The immediate danger of AI for most people is the effect on society if we switch to AI systems making decisions about everyday matters such as job or mortgage applications,’ he told MailOnline.

‘However, there is also a lot of focus on military applications such as AI-guided missile systems or drones.

“We need to be careful here, but the concern is that even if the Western world agrees on an ethical framework, others in the world may not.”

The Terminator's goal is simple: kill Sarah Connor, an LA resident who will give birth to John, who will lead a rebellion against the machines.


Dr. Tom Watts, a researcher on US foreign policy and international security at Royal Holloway University in London, said it is “critically important” that human operators continue to exercise control over robots and AI.

‘The entire international community, from superpowers such as China and the US to smaller countries, must find the political will to work together – and to manage the ethical and legal challenges that military applications of AI bring with them in this time of geopolitical turmoil,’ he writes in a new piece for The Conversation.

‘How countries deal with these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator – even if we don’t see time-traveling cyborgs anytime soon.’

In 1991, a hugely successful sequel – Terminator 2: Judgment Day – was released, depicting a ‘friendly’ reprogrammed version of the eponymous bot.

The film’s humanoid antagonist, named T-1000, can run at the speed of a car and, in one memorable scene, liquefies itself to walk through metal bars.

Scarily, researchers in Hong Kong are working to make this a reality, having designed a small prototype that can switch between liquid and solid phases.

Overall, creating a walking, talking robot with lethal powers would be more challenging than designing the software system that acts as its brain.

Since its release, The Terminator has been recognized as one of the best science fiction films of all time.

At the box office it earned more than twelve times its modest budget of US$6.4 million, which equates to £4.9 million at today’s exchange rate.

Dr. Watts believes the film’s greatest legacy is to “reshape the way we collectively think and speak about AI,” which today poses an “existential danger that often dominates public discussion.”

Elon Musk is among the technology leaders who have helped keep the focus on AI’s supposed existential risk to humanity, often while referencing the film.

A TIMELINE OF ELON MUSK’S COMMENTS ON AI

Musk has long been a vocal critic of AI technology, warning of the precautions humans need to take.


Elon Musk is one of the most prominent names and faces in the development of technologies.

The billionaire entrepreneur runs SpaceX, Tesla and the Boring company.

But while he is at the forefront of creating AI technologies, he is also acutely aware of their dangers.

Here’s a comprehensive timeline of all of Musk’s hunches, thoughts, and warnings about AI so far.

August 2014 – ‘We have to be super careful with AI. Potentially more dangerous than nuclear weapons.’

October 2014 – ‘I think we have to be very careful with artificial intelligence. If I had to guess what our biggest existential threat is, it’s probably this. So we have to be very careful with artificial intelligence.’

October 2014 – ‘With artificial intelligence we summon the demon.’

June 2016 – ‘The benign situation with ultra-intelligent AI is that we would be so far behind in intelligence that we would be like a pet, or a house cat.’

July 2017 – ‘I think AI is something that is risky at a civilization level, not just at an individual risk level, and so it really requires a lot of security research.’

July 2017 – ‘I’m introduced to the latest AI and I think people should really be concerned about that.’

July 2017 – ‘I keep sounding the alarm, but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.’

August 2017 – ‘If you don’t worry about the safety of AI, you should. Much more risk than North Korea.’

November 2017 – ‘Maybe there is a five to ten percent chance of success [of making AI safe].’

March 2018 – ‘AI is much more dangerous than nuclear weapons. So why don’t we have regulatory oversight?’

April 2018 – ‘[AI is] a very important topic. It’s going to impact our lives in ways we can’t even imagine yet.’

April 2018 – ‘[We could create] an immortal dictator from which we would never escape.’

November 2018 – ‘Maybe AI will make me its pet, laughing like a demon and saying who is the pet now.’

September 2019 – ‘If advanced AI (besides basic bots) is not applied to manipulate social media, it will not be long before it is.’

February 2020 – ‘At Tesla, using AI to solve self-driving isn’t just the icing on the cake, it’s the cake.’

July 2020 – ‘We are moving towards a situation where AI is much smarter than humans and I think that time frame is less than five years. But that doesn’t mean everything will go to hell within five years. It just means things get unstable or weird.’

April 2021: ‘A large part of real-world AI needs to be solved to enable unsupervised, generalized full self-driving.’

February 2022: ‘We need to solve a large part of AI to make cars drive themselves.’

December 2022: ‘The danger of training AI to be woke – in other words, to lie – is deadly.’