Don't you hate it when the godfathers disagree?
On the one hand, former Google scientist Dr. Geoffrey Hinton warns that we are moving too fast and that AI could ruin everything from jobs to truth. On the other side we find Meta's Yann LeCun.
Both scientists once worked together on the Deep Learning breakthroughs that changed the world of AI and sparked the flurry of developments in AI algorithms and large language models that brought us to this fraught moment.
Hinton aired his warnings earlier this year in The New York Times. Fellow Turing Award winner LeCun has largely contradicted Hinton, defending AI development in an extensive interview with Wired's Steve Levy.
“People are exploiting fear of the technology, and we risk driving people away from it,” LeCun told Levy.
LeCun's argument, which in its TL;DR form amounts to “Don't worry, embrace AI,” breaks down into a few key components that may or may not make you think differently.
Openness is good
I especially enjoyed LeCun's open-source argument. He told Levy that if you accept that AI will eventually come between us and much of our digital experience, then it makes no sense for a few AI powerhouses to control it. “You don't want that AI system controlled by a small number of companies on the West Coast of the US,” LeCun said.
This is a man who works as Meta's Chief AI Scientist. Meta (formerly Facebook) is a large West Coast company, one that recently released its own open-source LLM, Llama 2. I'm sure the irony is not lost on LeCun, but I think he may be targeting OpenAI. The world's largest AI vendor (creator of ChatGPT and DALL-E, and a key contributor to Microsoft's Copilot) started as an open, non-profit company. It now gets much of its funding from Microsoft (also a big West Coast company), and LeCun claims that OpenAI no longer shares its research.
Regulation is probably not the answer
LeCun has been vocal on the topic of AI regulation, but perhaps not in the way you'd think: he is actually against it. When Levy asked about all the damage an unregulated, all-powerful AI could do, LeCun emphasized that not only are AIs built with guardrails, but that when these tools are used in industry, they will have to follow strict pre-existing regulations (think: the pharmaceutical industry).
“The question people are debating is whether it makes sense to regulate AI research and development. And I don't think that's the case,” LeCun told Wired.
AGI is not around the corner
There has been a lot of talk in recent months about the potential of artificial general intelligence (AGI), which may or may not be much like your own intelligence. Some, including OpenAI's Sam Altman, believe this is on the horizon. However, LeCun is not one of them.
He argued that we can't even define AGI because human intelligence is not one thing. He has a point there. My intelligence would in no way compare to that of Einstein or LeCun.
You want AI to be smarter than you
According to LeCun, there's little doubt that AIs will eventually be smarter than humans, but he also notes that they won't have the same motivations as us.
He likens these AI assistants to “super smart people,” adding that working with them could be like working with super smart colleagues.
Even with all that intelligence, LeCun insists, these AIs won't have human-like motivations and drives. They won't pursue global dominance simply because they are smarter than we are.
LeCun doesn't dismiss the idea of programming in a drive (a surrogate objective), but he frames that as “objective-driven AI,” and since part of that objective could be an impenetrable guardrail, the protections would be baked in.
Do I feel better? Is less regulation, more open source, and a firmer embrace of AI mediation the path to a more secure future? Maybe. LeCun certainly thinks so. I wonder if he's talked to Hinton lately.