Top AI researcher says AI will end humanity and we should stop developing it now – but don’t worry, Elon Musk disagrees

Here’s something cheerful to think about the next time you use an AI tool. Most people involved in artificial intelligence think it could end humanity. That’s the bad news. The good news is that the likelihood of this happening varies greatly depending on who you listen to.

p(doom) is the ‘probability of doom’ – the chance that AI will take over the planet or do something to destroy us, such as creating a biological weapon or starting a nuclear war. At the happiest end of the p(doom) scale, Yann LeCun, one of the “three godfathers of AI,” who currently works at Meta, puts the chances at less than 0.01% – lower odds than an asteroid wiping us out.