‘The outcome could be extinction’: Elon Musk-backed researcher warns there is NO evidence AI can be controlled – and says technology should be shelved NOW
Dr. Roman V. Yampolskiy says he has found no evidence that AI can be controlled and argues it should therefore not be developed
A researcher backed by Elon Musk is once again sounding the alarm about AI’s threat to humanity after finding no evidence the technology can be controlled.
Dr. Roman V. Yampolskiy, an AI security expert, has received funding from the billionaire to study advanced intelligent systems that are the focus of his upcoming book ‘AI: Unexplainable, Unpredictable, Uncontrollable’.
The book explores how AI could dramatically reshape society, not always to our benefit, and how it has the “potential to cause an existential catastrophe.”
Yampolskiy, an associate professor of computer science at the University of Louisville, conducted a “review of the scientific literature on AI” and concluded that there is no evidence the technology can be controlled.
“No wonder many consider this the most important problem humanity has ever faced,” Yampolskiy shared in a statement.
“The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”
To fully control AI, he suggested it must be customizable with ‘undo’ options, limited, transparent and easily understandable in human language.
OpenAI CEO Sam Altman said Tuesday that questions and concerns about AI are important, but that the real danger may not be killer robots rebelling against humanity.
“There are some things in there that you can easily imagine going really wrong. And I’m not that interested in the killer-robots-walking-down-the-street direction of things going wrong,” he said at the World Government Summit in Dubai.
“I’m much more interested in the very subtle societal misalignments where we just have these systems in society and, without any particular ill intent, things just go terribly wrong.”
Musk is said to have provided funding to Yampolskiy in the past, but the amount and details are unknown.
In 2019, Yampolskiy wrote a blog post on Medium thanking Musk “for partially funding his work on AI safety.”
Tesla’s CEO has also raised alarms about AI, most notably in 2023 when he and more than 33,000 industry experts signed an open letter from the Future of Life Institute.
The letter said AI labs are currently “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
‘Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.’
Musk has expressed a desire to ensure that his companies’ AI and robots can be controlled by humans, including by making Tesla’s ‘Optimus’ robot weak enough so that it cannot harm humans.
“We’re setting it up so that it is at a mechanical level, at a physical level, such that you can run away from it and most likely overpower it,” he said when announcing Optimus in 2021.
And Yampolskiy’s forthcoming book appears to reflect such concerns.
He expressed concern about the new tools developed in recent years that pose risks to humanity, regardless of the benefits such models provide.
In recent years, the world has watched AI begin to answer queries, compose emails, and write code.
Now such systems detect cancer, create new drugs and are used to locate and attack targets on the battlefield.
And experts have predicted that AI will reach the singularity by 2045, the point at which technology surpasses human intelligence and can reproduce itself, and beyond which we may no longer be able to control it.
“Why do so many researchers assume that the AI control problem is solvable?” Yampolskiy said.
‘As far as we know, there is no evidence of that, no proof. Before we embark on the quest to build a controlled AI, it is important to show that the problem is solvable.’
Although the researcher said he conducted an extensive review to reach his conclusion, it is not known at this time exactly what literature he examined.
What Yampolskiy did provide was his reasoning for why he believes AI cannot be controlled: the technology can learn, adapt and act semi-autonomously.
Such capabilities make its decision-making capacity effectively infinite, which means an infinite number of safety issues can arise, he explained.
And because the technology adapts as it evolves, people may not be able to predict problems before they arise.
“If we don’t understand AI’s decisions and we only have a ‘black box’, we cannot understand the problem and reduce the chance of future accidents,” Yampolskiy said.
“For example, AI systems are already being tasked with making decisions in healthcare, investment, employment, banking and security, to name a few.”
Such systems must be able to explain how they arrived at their decisions, and especially to demonstrate that they are free from bias.
“If we get used to accepting AI’s answers without explanation, and essentially treating it like an Oracle system, we wouldn’t be able to tell when it starts giving wrong or manipulative answers,” Yampolskiy explains.
He also noted that as AI’s capabilities increase, so does its autonomy, while our control over it decreases – and increased autonomy is synonymous with decreased safety.
“Humanity faces a choice: do we become like babies, cared for but not in control, or do we refuse a helpful guardian but remain in charge and free?” Yampolskiy warned.
The expert did share suggestions on how to limit the risks, such as designing a machine that accurately follows human commands, but Yampolskiy pointed out the possibility of conflicting commands, misinterpretation or malicious use.
“Humans being in control can result in conflicting or explicitly malicious commands, while AI being in control means humans are not,” he explained.
“Most AI safety researchers are looking for a way to align future superintelligence with humanity’s values.
‘Value-aligned AI will be biased by definition: pro-human bias, good or bad, is still a bias.
‘The paradox of value-aligned AI is that someone who explicitly orders an AI system to do something can get a ‘no’ while the system tries to do what the person actually wants.
“Humanity is protected or respected, but not both.”