Top AI researcher says AI will end humanity and we should stop developing it now – but don’t worry, Elon Musk disagrees

Here’s something cheerful to think about the next time you use an AI tool. Most people involved in artificial intelligence think it could end humanity. That’s the bad news. The good news is that the likelihood of this happening varies greatly depending on who you listen to.

p(doom) is the ‘probability of doom’ – the chance that AI will take over the planet or do something to destroy us, such as creating a biological weapon or starting a nuclear war. At the happiest end of the p(doom) scale sits Yann LeCun, one of the “three godfathers of AI,” who currently works at Meta. He puts the chances at effectively zero.

Unfortunately, no one else is even close to being as optimistic. Geoff Hinton, another of the three godfathers of AI, says there is a 10% chance that AI will wipe us out in the next twenty years, and Yoshua Bengio, the third godfather, raises that figure to 20%.

99.999999% chance

At the most pessimistic end of the scale is Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville. He thinks doom is all but guaranteed, estimating the chance that AI will wipe out humanity at 99.999999%.

Elon Musk said during a “Great AI Debate” seminar at the four-day Abundance Summit earlier this month: “I think there is a chance that this will end humanity. I probably agree with Geoff Hinton that it’s about 10% or 20% or something like that,” before adding, “I think the probable positive scenario outweighs the negative scenario.”

In response, Yampolskiy told Business Insider that Musk was, in his opinion, “a bit too conservative,” and that we should stop developing the technology now because it will be virtually impossible to control AI once it becomes more advanced.

“I’m not sure why he thinks it’s a good idea to pursue this technology anyway,” Yampolskiy said. “If he (Musk) is concerned about competitors getting there first, it doesn’t matter because uncontrolled superintelligence is just as bad no matter who creates it.”

At the summit, Musk offered a solution to prevent AI from wiping out humanity. “Don’t force it to lie, even if the truth is unpleasant,” Musk said. “It’s very important. Don’t let the AI lie.”

If you’re wondering where other AI researchers and forecasters currently rank on the p(doom) scale, you can check out the list here.
