Geoffrey Hinton, dubbed the ‘Godfather of AI,’ warns technology will be smarter than humans in five years

The ‘Godfather of AI’ has warned that by the end of the decade the technology will be smarter than humans in some ways – and he believes it will eventually destroy humanity.

In a doom-laden interview with 60 Minutes, Geoffrey Hinton, 75, predicts that in five years the systems will surpass human intelligence, leading to the rise of ‘killer robots’, fake news and a surge in unemployment.

Hinton is a former Google executive credited with creating the technology that became the basis of systems like ChatGPT and Google Bard.

He recently revealed his fears that the technology could go rogue and write its own code, allowing it to change itself.

Geoffrey Hinton, 75, is credited with creating the technology that became the basis of systems such as ChatGPT and Google Bard

While the scientist fears many aspects of the technology, he said AI has huge benefits in healthcare, such as designing medicines and recognizing medical problems.

“We’re entering a period of great uncertainty where we’re dealing with things we’ve never done before,” Hinton told 60 Minutes.

‘And normally the first time you deal with something completely new, you get it wrong. And we can’t afford to get it wrong with these things.’

He explained that AI becoming sentient is just the tip of the iceberg, but the real danger is when the technology ‘gets smarter’.

This would be possible if AI reaches the singularity, a hypothetical point at which technology surpasses human intelligence and changes the path of our evolution – an event some predict will happen by 2045. AI would first have to pass the Turing test.

When it does, the technology would be considered an independent intelligence, able to replicate itself into an even more powerful system that humans cannot control.

He explained that AI becoming sentient is just the tip of the iceberg, but the real danger is when the technology ‘gets smarter’

‘One of the ways these systems can escape control is by writing their own computer code to change themselves. And that is something we should be seriously concerned about,’ he said.

Hinton went on to explain that the material used to train AI, such as fictional works and media content, is likely to fuel its super-intelligence.

“I think in five years’ time it may well be able to reason better than us,” Hinton said.

And although the British scientist foresees these events coming to pass, he noted that there is no real way to stop them.

He went on to tell 60 Minutes that humanity may have reached the point where it must either pause the development of the technology or stay the course and prepare for what lies ahead – even if that could mean destruction.

“I think my main message is, there’s enormous uncertainty about what’s going to happen next,” Hinton said. “These things do understand, and because they understand, we have to think hard about what’s next, and we just don’t know.”

There is a huge AI divide in Silicon Valley. Brilliant minds are divided on the progress of the systems – some say it will improve humanity, and others fear that the technology will destroy it

Other tech experts, such as Elon Musk, have also called for a pause out of fear for the future of AI.

In March, Musk and more than 1,000 other AI and technology leaders signed an open letter highlighting the dangers Hinton echoed this month.

The open letter urged governments to carry out more risk assessments on AI before humans lose control and it becomes a sentient anthropomorphic species.

Kevin Baragona, founder of DeepAI, who signed the letter, told DailyMail.com: ‘It’s almost akin to a war between chimpanzees and humans.

‘The humans obviously win as we are much smarter and can use more advanced technology to defeat them.

‘If we are like the chimpanzees, then the AI will destroy us, or we will become addicted to it.’

However, Bill Gates, Google CEO Sundar Pichai and futurist Ray Kurzweil are on the other side of the aisle.

They call ChatGPT-like AI the ‘most important’ innovation of our time – and say it could solve climate change, cure cancer and improve productivity.