OpenAI has a new scale to measure how smart their AI models are getting – which isn’t as reassuring as it should be

OpenAI has developed an internal scale to chart the progress of its large language models as they move toward artificial general intelligence (AGI), according to a report from Bloomberg.

AGI usually means AI with human-level intelligence and is considered the broad goal for AI developers. OpenAI has previously defined AGI as “a highly autonomous system that outperforms humans at the most economically valuable tasks.” That’s a point well beyond current AI capabilities. The new scale is intended to provide a structured framework for tracking progress and establishing benchmarks in that pursuit.

The scale OpenAI introduced breaks progress down into five levels, or milestones, on the road to AGI. ChatGPT and its rival chatbots sit at Level 1. OpenAI claims to be on the cusp of reaching Level 2, which would be an AI system capable of matching a human with a PhD at solving basic problems. That could be a reference to GPT-5, which OpenAI CEO Sam Altman has said will be a “significant leap forward.” After Level 2, the levels become progressively more ambitious: Level 3 would be an AI agent that can carry out tasks on your behalf without supervision, while a Level 4 AI would come up with genuinely new ideas and concepts. At Level 5, the AI would be able to take over work not just for an individual, but for entire organizations.

Level up

The idea of levels makes sense for OpenAI, or indeed for any AI developer. A comprehensive framework not only helps OpenAI internally, it could also serve as a common standard for evaluating other AI models.

Still, AGI won’t arrive right away. Earlier comments from Altman and others at OpenAI suggest it could happen within five years, but timelines vary widely among experts. The computing power required, along with the financial and technological hurdles, is substantial.

That’s on top of the ethical and safety questions that AGI raises. There are very real concerns about what AI at that level would mean for society. And OpenAI’s recent moves may not reassure anyone. In May, the company disbanded its safety team following the departure of its leader, OpenAI co-founder Ilya Sutskever. Top researcher Jan Leike also stepped down, citing concerns about OpenAI’s safety culture. Still, by providing a structured framework, OpenAI aims to establish concrete benchmarks for its models and those of its competitors, and perhaps help us all prepare for what’s to come.
