Former Google chief warns AI likely to view humans as ‘scum’ who need to be controlled 

At Google’s secretive R&D wing X, Mo Gawdat, the division’s former chief business officer, regarded the AI they created as his “children.”

Now he regrets being a parent.

“I lived among those machines. I know how intelligent they are,” Gawdat told podcaster Dan Murray-Serter this month. “I wish I hadn’t started it.”

Gawdat warns that the large language models that train today’s AI only let the machines learn about the human race from the mess we’ve created online. There, the bots probably only see the worst of what humanity has to offer.

“The problem is the negativity bias,” Gawdat said. “The people who are intentionally bad are the ones in the headlines. They are also the ones who invest more time and effort to get to power.”

Any intelligent AI trained on the controversy-stirring, “anger-baiting” culture of online content, generated by the news and spread through social media, will come to see our species as evil and a threat.

“What are the chances that AI considers us scum these days?” Gawdat said. “Very high.”

Mo Gawdat, former chief business officer at Google X, a “serial entrepreneur” and start-up mentor, believes that humanity should be careful about what information we feed into AI

Gawdat says a dystopian scenario like the film adaptation of I, Robot is likely if humans continue to pursue automated killing machines for warfare. But he warns that the public is too preoccupied with these scenarios to focus on fixing the culture that will inevitably lead to them.

Gawdat, who wrote a 2021 book about the future of AI, Scary Smart, also thinks ChatGPT is a red herring whose power is vastly exaggerated by the public and government policy makers.

“Now that ChatGPT is coming, even though ChatGPT is really and honestly not the problem, now everyone is waking up and saying, ‘Panic! Panic! Let’s do something about it,'” Gawdat told Serter on his Secret Leaders podcast.

By focusing on distant apocalyptic scenarios, he said, humanity would fail to address the issues it can now change to ensure a more harmonious future in our inevitable partnership with hyper-intelligent AI.

“Between now and the time when AI can actually generate its own computing power and perform installations on its own through robotic arms and so on, it doesn’t have the agency to do the scenarios you’re talking about here,” Gawdat said.

“Mankind will decide to create bigger and better data centers, to spend more power on those machines, to take to the streets and protest because they lost their jobs and call the AI ‘the demon,’” he added.

“It is human action, it is humanity that poses the threat.”

Gawdat told Secret Leaders listeners that people should stop worrying about distant possibilities in 2037 or 2040, when AI could decide to “crush humanity like flies.”

He does believe AI will soon have the “agency to make killing machines,” but only “because humans make them.”

“So yes, AI could use that to dictate an agenda, like in the movie I, Robot,” Gawdat said, “but that’s a little far off.”

However, because AI technology is advancing so quickly, Gawdat warns that even his own expert opinions should be taken with skepticism.

“Any statement about AI today that is forward-looking is false,” he added. “Why? Because the shit has already hit the fan.”