It now seems entirely possible that ChatGPT parent company OpenAI has solved the “superintelligence” problem and is now grappling with the consequences for humanity.
In the wake of OpenAI’s firing and rehiring of co-founder and CEO Sam Altman, revelations about what prompted the move continue to emerge. At the very least, a new report in The Information points to internal disruption caused by a major breakthrough in generative AI, one that could lead to the development of something called “superintelligence” within this decade or sooner.
Superintelligence is, as you might have guessed, intelligence beyond humanity, and the development of AI capable of such intelligence without the proper safeguards is obviously a major red flag.
According to The Information, the breakthrough was spearheaded by OpenAI Chief Scientist (and now regret-filled board member) Ilya Sutskever.
It reportedly allows AI to use cleaner, computer-generated data to solve problems it has never seen before. Instead of being trained on many different versions of the same problem, the AI is trained on information that is not directly related to the problem. Solving problems this way – usually mathematical or scientific problems – requires reasoning: something we do, and AIs, so far, do not.
OpenAI’s flagship consumer-facing product, ChatGPT (powered by the GPT large language model, or LLM), may seem so smart that it must be using reason to formulate its responses. Spend enough time with ChatGPT, however, and you’ll soon realize that it’s just repeating what it’s learned from the vast amounts of data it’s been fed, making statistically likely guesses about how to build sentences that make sense and apply to your question. There is no reasoning whatsoever here.
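To make the distinction concrete, here’s a toy sketch (in Python, with a made-up probability table – nothing remotely like OpenAI’s actual models) of what “guessing the next word” looks like: the program never reasons about meaning, it just picks statistically likely continuations.

```python
import random

# Hypothetical next-token probabilities "learned" from training data.
# A real LLM computes these over ~100,000 tokens with a neural network;
# the principle - pick a statistically likely continuation - is the same.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def generate(prompt: list[str], max_tokens: int = 4) -> list[str]:
    """Extend a prompt one token at a time by sampling likely continuations."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        context = tuple(tokens[-2:])       # only recent context matters here
        probs = NEXT_TOKEN_PROBS.get(context)
        if probs is None:                  # no learned continuation: stop
            break
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(" ".join(generate(["the", "cat"])))  # e.g. "the cat sat on the mat"
```

The output can look fluent, but the program has no idea what a cat or a mat is – which is exactly why genuine reasoning over never-before-seen problems would be such a leap.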
However, The Information claims that this breakthrough – which Altman may have alluded to during a recent conference appearance, saying, “Personally, I’ve been in the room these past few weeks as we pushed back a kind of veil of ignorance and the frontier of discovery forward” – sent shockwaves through OpenAI.
Managing the threat
While there’s no sign of superintelligence in ChatGPT at the moment, OpenAI is certainly working to integrate some of this power into at least some of its premium products, like GPT-4 Turbo and its GPTs chatbot agents (and future ‘intelligent agents’).
Linking superintelligence to the board’s recent actions, which Sutskever initially supported, could be a tall order. The breakthrough reportedly came months ago and prompted Sutskever and another OpenAI scientist, Jan Leike, to form a new OpenAI research group called Superalignment, with the aim of developing superintelligence safeguards.
Yes, you heard that right. The company working to develop superintelligence is at the same time building tools to protect us from superintelligence. Imagine Doctor Frankenstein equipping the villagers with flamethrowers, and you get the idea.
What is not clear from the report is how internal concerns about the rapid development of superintelligence may have led to Altman’s dismissal. Maybe it doesn’t matter.
As I write this, Altman is on his way back to OpenAI, the board has been reshuffled, and the work to build superintelligence – and to protect us from it – will continue.
If this is all confusing, I recommend asking ChatGPT to explain it to you.