Ex-Google CEO warns artificial intelligence could be used to kill ‘many, many people’ 


A former Google CEO has warned that artificial intelligence could be used to harm or kill many people in the future.

Eric Schmidt – who spent 20 years at the helm of the search giant – told a gathering of senior executives on Wednesday that he believes AI poses an “existential risk” to humanity, “defined as many, many, many, many people harmed or killed.”

Schmidt, who holds a doctorate in computer science, said the technology – which Google is helping to pioneer with its still relatively primitive Bard chatbot – could “be abused by bad people” once it becomes more sophisticated.

Schmidt, who recently co-chaired the US National Security Commission on AI, is the latest in a string of former Google employees to publicly speak out about the technology’s rapid development in recent weeks.

Schmidt told a CEO summit in London that “misused” AI could lead to “many, many, many, many people getting hurt or killed.” Above, Schmidt, who recently co-chaired the US National Security Commission on Artificial Intelligence, set up on behalf of the federal government to assess AI threats

Brilliant minds across Silicon Valley are divided over the advancement of AI systems – some say it will improve humanity and others fear the technology will destroy it

Geoffrey Hinton, credited as the “Godfather of Artificial Intelligence,” sensationally resigned from Google earlier this spring, citing his AI fears. Hinton said part of him now regrets helping to build the systems. Above, Hinton speaking at a summit organized by Thomson Reuters

Schmidt focused specifically on AI’s burgeoning ability to identify software vulnerabilities for hackers and the technology’s inevitable hunt for novel biological pathways, which could lead to the creation of terrifying new bioweapons.

“There are scenarios that are not today, but will be fairly soon, where these systems could find zero-day exploits in cyber issues, or discover new kinds of biology,” Schmidt said at The Wall Street Journal’s CEO Council Summit in London.

So-called “zero-day exploits” are security flaws in code – everywhere from personal computing to digital banking to infrastructure – that have only just been discovered and thus have not yet been patched by cybersecurity teams. Zero-days are among the most prized tools in a hacker’s arsenal.

Schmidt didn’t go into detail about the “new kinds of biology” devised by a malevolent AI that worries him the most.

“Now, this is fiction today,” Schmidt warned, “but the reasoning is probably true. And when that happens, we want to be ready to know how to make sure these things aren’t abused by bad people.”

Schmidt’s comments, which are not his first warnings, join a raucous debate in Silicon Valley about the moral questions and deadly dangers of AI.

Elon Musk, Apple co-founder Steve Wozniak, and the late Stephen Hawking are among AI’s most famous critics who believe it poses a “profound risk to society and humanity” and could have “catastrophic consequences.”

Earlier this spring, Geoffrey Hinton, the “Godfather of Artificial Intelligence,” sensationally resigned from Google, warning that AI technology could disrupt life as we know it.

Talking to the New York Times about his departure, he warned that AI would flood the internet with fake photos, videos and texts in the near future.

These would be of a standard, he added, where the average person “could not know what is true anymore.”

But Bill Gates, Sundar Pichai and futurist Ray Kurzweil are on the other side of the debate, touting the technology as the “most important” innovation of our time.

Schmidt helped create a massive 756-page report on the national security threats AI poses to the US. The report advised the US to resist calls for a global ban on AI-powered autonomous weapons, arguing that neither Russia nor China would uphold their end of any such treaty.

Above, a Boston Dynamics robot dog used by the Massachusetts State Police to enter a building. Such bots are among the military machines that an abused AI could exploit

Schmidt co-chaired the US National Security Commission on AI from 2019 to 2021. The commission’s report warned that the US could lose its lead as an “AI superpower”

A photo from Paramount’s Terminator Genisys, which explores the hypothetical dark side of artificial intelligence

But among these titans, only Schmidt has helmed a gigantic 756-page report for the US government on the national security risks of AI.

“America is not prepared to defend or compete in the AI era,” Schmidt and his vice chairman on the US National Security Commission on AI wrote in 2021. “This is the harsh reality we must face.”

Schmidt, who co-chaired the investigative body with Bob Work, a former U.S. deputy defense secretary, for three years, argued that China was on track to surpass the U.S. as planet Earth’s “AI superpower.”

“We will not be able to defend against AI-assisted threats,” Schmidt and Work wrote, “without ubiquitous AI capabilities and new paradigms of war.”

Their committee recommended that the Biden administration commit to doubling US government spending on AI research and development to $32 billion a year by 2026, and to break free of dependence on overseas microchip manufacturing.

Schmidt and his committee also suggested that the US should resist calls for a global ban on AI-powered autonomous weapons, arguing that neither Russia nor China would uphold their end of any treaty banning these weapons.

However, Schmidt told the CEOs’ meeting this week in London that he personally had no clear ideas about how AI should or even could be regulated, suggesting it should be a “broader question for society.”

He said he believes it is unlikely that a new regulatory body will be established to oversee AI in the United States.