Moments after OpenAI, Google or Anthropic release a major upgrade to their AI models, you’ll already see people speculating about the date and features of the next update. And there have been fairly regular updates to fuel those rumors. However, those days may be over, according to a Bloomberg report. All three major AI developers are reportedly struggling to make their next-gen models match their ambitions for improvement over the current crop.
The report claims that OpenAI’s work on the Orion model is not going as well as the company expected. The model does not perform at the level the company is striving for, especially when it comes to coding. Orion may offer no seismic leap over GPT-4 the way GPT-4 blew GPT-3.5 out of the water. That may be one of the reasons why OpenAI CEO Sam Altman has publicly dismissed rumors about the Orion model’s release date and an accompanying upgrade to ChatGPT.
Delays and lowered expectations also plague Google and Anthropic. Google’s Gemini development is proceeding more slowly than hoped, according to Bloomberg. Anthropic has already postponed the release of its Claude 3.5 Opus model for similar reasons, despite teasing it earlier this year.
All AI developers face the same ceilings when expanding their models’ capabilities. The biggest is probably training data. The companies have used huge datasets to train their AI models, but even the internet is not infinite, especially when it comes to high-quality data useful for training AI. Finding previously unused, accessible information is becoming difficult. This is partly due to growing awareness of, and attention to, the ethical and legal rights around using certain data, but that is only part of the explanation. At some point, there simply are not enough human examples for the AI models to absorb and improve on. Even if the companies find enough raw data, processing it and incorporating it into an AI model is expensive in both money and computing power. If the data can’t provide more than small improvements, upgrading the AI model may not be worth the price.
Fuel or fumes?
The report details how OpenAI and its rivals are looking for other ways to upgrade their models, such as refining models like Orion after initial training using human feedback. That is a slow way to improve an AI model, and it raises the question of whether AI has reached the limits of rapid scaling in size and capability. Raw computing power and an avalanche of data may not be enough to make AI developers’ dreams come true. They’ll have to get more creative in how they iterate on their models without throwing the entire internet at them.
For the rest of us, this may mean slightly slower releases of new and improved AI features. That might not be a terrible thing if it gives everyone a chance to catch their breath and really dig into the best ways to use all the AI tools released in recent years. There is plenty to discover with ChatGPT’s o1 model. And who knows, maybe this will give OpenAI room to work on releasing the Sora AI video maker, which has remained tightly limited despite OpenAI teasing it with a steady stream of demos.