OpenAI co-founder’s new company promises ‘Safe Superintelligence’ – a laughably impossible dream

Pro tip for anyone naming a new company, especially in a field as fraught as AI: don’t settle for the obvious oxymoron. That’s my note to the well-meaning Ilya Sutskever, former chief scientist and co-founder of OpenAI, who is now launching his own artificial intelligence company whose name doubles as its goal: Safe Superintelligence.

While Sutskever isn’t a household name like OpenAI co-founder Sam Altman, he is widely recognized as the man behind the reported superintelligence breakthrough late last year – one that sparked a meltdown at the ChatGPT parent and led to the sudden, but short-lived, ouster of Altman.

After Altman returned, there were reports that a breakthrough toward super AI, or artificial general intelligence – something that could quickly lead to AI surpassing above-average human intelligence – had so panicked Sutskever and the OpenAI board that they tried to put the brakes on the whole thing. Altman apparently didn’t agree, and he was out until cooler heads prevailed.

There is no such thing as ‘safe superintelligence’. Responsible Superintelligence is possible…

In May this year, Sutskever announced that he was leaving OpenAI, news that arrived just days after the company unveiled the eerily powerful GPT-4o (remember the one with a voice that sounded uncannily like Scarlett Johansson’s?). At the time, Altman expressed sadness over his partner’s departure, and Sutskever said only that he was working on a project that was “meaningful to him.” No one thought he was about to start throwing clay and selling pottery.

The new company, announced on both X (formerly Twitter) and a new, spare website, is that passion project made real. It’s a direct response to what clearly shook Sutskever at OpenAI. On the site, Sutskever explains that Safe Superintelligence is “our mission, our name and our entire product roadmap because it is our sole focus. Our team, our investors and our business model are all aligned to achieve SSI.”

To achieve this goal, the company will pursue superintelligence and safety simultaneously, with the emphasis, it seems, on the former.

I’m guessing Sutskever isn’t much of a pop culture, film, or even science fiction fan. How else could he or anyone on his team avoid chuckling when saying the company name out loud? He could be forgiven for missing the poorly reviewed 2020 comedy Superintelligence in which, according to IMDb, “…an all-powerful superintelligence chooses to study the average Carol Peters, the fate of the world hangs in the balance. As the AI decides to enslave, save, or destroy humanity, it’s up to Carol to prove that people are worth saving.”

Although the film earned a dismal 5.4 rating, it isn’t the only work making dire predictions about humans versus superintelligence. The term has been around for more than a decade, and while few would deny the technology’s potential, the word has never carried the full sheen of hope and promise. I came across Nick Bostrom’s 2014 book, Superintelligence: Paths, Dangers, Strategies. Note that “Dangers” gets immediate second billing. The book’s description ponders: “Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save us or destroy us?”

There is hardly a TV show or movie that looks at super-intelligent AI any differently. If artificial intelligence is smarter than us, all bets are off. It’s the free-floating fear of our culture, our literature, and everyone I know.

Reassure us

Not a day goes by that I don’t have a conversation about AI. It’s not just at work, where you would expect it. It’s with my wife and adult children. It’s at parties and TV tapings. A mixture of excitement and fear is common. No one knows exactly where things are going, and most share the lingering fear that AI will surpass human intelligence and ruin first our careers and then all of us. They don’t know the term ‘superintelligence’, but the concept is crystal clear in their minds. It’s not just about AI being smarter than us; it’s about the potential for superintelligence to live in all the devices we carry in our pockets and use on our desktops.

Dozens of new laptops arrived this week with Microsoft’s Copilot+ baked deep into the silicon. To be clear, it’s nothing close to superintelligence. In fact, the demos I’ve seen offer only a limited glimpse of the true potential of system-level AI. But as someone pointed out to me yesterday, if all these AIs get smarter and become aware of each other – and of us, especially our weaknesses – what’s to say they won’t simply take control of those systems and our lives?

Imagine a soapbox racer speeding downhill, with no brakes and a driver who understands only 80% of the controls, and you get the idea.

As someone steeped in all these conversations, I can’t tell you with any certainty whether any of this will come to pass, at least not in my lifetime.

Still, I once thought that general AI, or superintelligence, wouldn’t arrive until I’m a doddering old fool. Now I predict 18 months.

That’s why I find Sutskever’s company name almost comical. The pace of AI development is exponential. Imagine a soapbox racer speeding downhill, with no brakes and a driver who understands only 80% of the controls, and you get the idea.

There is no such thing as ‘safe superintelligence’. Responsible superintelligence is possible, and if I had been in the room when Sutskever and his team named the company, I would have suggested it. Ultimately, that’s all these AI companies can promise: to act in a responsible and perhaps even humane manner. That could lead to ‘safer’ superintelligence, but complete safety is an illusion at best.
