Gemini 2.0 doubles the speed of Google's AI assistant – and can boost Search
Google has unveiled the next version of the Gemini AI model family, starting with the smallest version, Gemini 2.0 Flash. Gemini 2.0 promises faster performance, sharper reasoning, and improved multimodal capabilities, among other upgrades, as it is integrated with Google’s various AI and AI-adjacent services.
The timing of the news may have something to do with a desire to step on the toes of OpenAI and Apple, whose 12 Days of OpenAI event and new Apple Intelligence features both landed this week, especially since Gemini 2.0 is largely built around developer experimentation. Still, there are some immediate opportunities for people on the non-enterprise side. In particular, Gemini assistant users and those who see AI Overviews in Google Search can get started with Gemini 2.0 Flash.
If you interact with the Gemini AI through the website on a desktop or mobile browser, you can now try Gemini 2.0 Flash by selecting it from the model drop-down menu. The new model is also on its way to the mobile app. It may not be life-changing, but Gemini 2.0 Flash's speed in processing and generating content is remarkable. It is much faster than Gemini 1.5 Flash; Google claims the new model is twice as responsive yet outperforms even the more powerful Gemini 1.5 Pro.
Overviews and future news
Google is also integrating Gemini 2.0 into its AI Overviews feature, which already writes summaries that answer Google searches without requiring a click through to websites. The company boasted that more than a billion people have seen at least one AI Overview since the feature debuted, and that the summaries have led users to a wider range of sources than usual.
By integrating Gemini 2.0 Flash, AI Overviews has become even better at answering complex, multi-step questions, Google claims. For example, suppose you are stuck on a math problem. You can upload a photo of the equation, and AI Overviews will not only understand it but also walk you through the solution step by step. The same goes for debugging code: if you describe the problem in a query, AI Overviews can explain what is going wrong or even write a corrected version for you. It essentially bakes Gemini's assistant skills into Google Search.
Most of the Gemini 2.0 news revolves around developers, who can access Gemini 2.0 Flash via the Gemini API in Google AI Studio and Vertex AI; there’s also a new Multimodal Live API for those who want to create interactive, real-time experiences such as virtual tutors or AI-powered customer service bots.
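To give a sense of what that developer access looks like, here is a minimal sketch of calling Gemini 2.0 Flash through the Gemini API from Python. It assumes the google-generativeai SDK and an experimental model ID such as "gemini-2.0-flash-exp"; the exact model name and availability may differ depending on your account and region.

```python
# Minimal sketch: calling Gemini 2.0 Flash via the Gemini API (Google AI Studio).
# Assumes the google-generativeai SDK is installed (pip install google-generativeai)
# and that "gemini-2.0-flash-exp" is the experimental model ID exposed to your account.
import os

import google.generativeai as genai

# Authenticate with an API key created in Google AI Studio.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Select the Gemini 2.0 Flash model (model ID assumed; check AI Studio for yours).
model = genai.GenerativeModel("gemini-2.0-flash-exp")

# Ask a question and print the generated answer.
response = model.generate_content(
    "Explain, step by step, how to solve 2x + 3 = 11."
)
print(response.text)
```

The Multimodal Live API mentioned above is a separate, streaming-oriented interface for real-time audio and video interactions, so the simple request-response pattern shown here would not apply there.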
Ongoing developer experiments that could lead to changes for consumers will also get a boost from Gemini 2.0, including the universal AI assistant Project Astra, the browser-based task agent Project Mariner, and partnerships with game developers to improve how in-game characters interact with players.
It’s all part of Google’s ongoing efforts to put Gemini in everything. But for now, that just means a faster, better AI assistant that can keep up with, if not outright beat, ChatGPT and other rivals.