Meta’s AI Chief Is Right to Call AI Fear-mongering ‘BS’, But Not for the Reason He Thinks

AI is the latest technology monster making people fearful about the future. Legitimate concerns around issues such as ethical training, environmental impact and AI-powered scams all too easily turn into nightmares of Skynet and the Matrix. The prospect of AI becoming sentient and overthrowing humanity is often discussed, but as Meta’s AI chief Yann LeCun told The Wall Street Journal, the idea is “complete BS”. LeCun described AI as less intelligent than a cat and incapable of plotting, or even desiring, anything at all, let alone the demise of our species.

LeCun is right when he says AI isn’t planning to kill humanity, but that doesn’t mean there’s nothing to worry about. I’m much more concerned about people trusting AI to be smarter than it is. AI is just another technology, which means it is neither good nor bad. But the law of unintended consequences suggests that it’s not a good idea to rely on AI for important, life-changing decisions.

Think of the disasters and near-disasters caused by relying on technology over human decision-making. High-frequency stock trading, carried out by machines far faster than any human, has triggered near-collapses of parts of the economy on more than one occasion. A far more literal meltdown almost occurred when a Soviet missile detection system malfunctioned and reported incoming nuclear warheads. In that case, only a courageous human in the loop prevented a global Armageddon.

Now imagine that AI as we know it today continues to trade on the stock market because humans have handed it ever more extensive control. Then imagine that AI accepts a faulty missile alarm at face value and is allowed to launch missiles without human intervention.

AI Apocalypse averted

Yes, it sounds far-fetched that people would put a technology known for hallucinating in charge of nuclear weapons, but it’s not that much of a departure from what’s already happening. The AI voice on the customer service phone line may have decided whether you’ll get a refund before you ever get a chance to explain why you deserve one, and there’s no human to listen and change its mind.

AI will only do what we train it to do, using data provided by humans. That means it reflects both our best and worst qualities; which facet emerges depends on the circumstances. But leaving too much decision-making to AI is a mistake at every level. AI can be a big help, but it shouldn’t decide whether someone gets hired or whether an insurance policy pays for surgery. We should be concerned that humans will, accidentally or otherwise, misuse AI and let it replace human judgment.

Microsoft’s choice to call its AI assistants Copilots is apt: it evokes someone who helps you achieve your goals but doesn’t set them, and who takes no more initiative than you allow. LeCun is right that AI is no smarter than a cat, but a cat with the ability to push you, or all of humanity, off a metaphorical ledge is not something we should encourage.
