Google is ‘reimagining’ Android to be all-in on AI – and it looks really impressive

Google wants to “reimagine” Android with its Gemini AI, calling it a “once in a generation event to reimagine what phones can do.”

At Google I/O 2024, the search giant said it would integrate AI into Android in three ways: putting AI search into Android, making Gemini the new AI assistant, and leveraging AI on-device.

Translated into everyday language, this means more AI search tools, like Circle to Search, will take center stage in Android. The AI-powered tool, which can identify physically circled objects and text in photos and on screen, will be enhanced later this year to tackle more complex problems such as graphs and formulas.

Now found in the Google Pixel 8a, Gemini AI will become the AI foundation for Android, bringing multimodal AI – the ability to process, analyze, and learn from information and input across multiple sources and sensors – to the mobile operating system.

In practice, this means that Gemini will work across a variety of apps and tools to provide context-aware suggestions, answers, and prompts. One example shown was using the AI in the Android Messages app to produce AI-generated images that you can share in chats. Another is the ability to answer questions about a YouTube video someone is watching, or to pull data from sources such as PDFs to answer very specific questions, such as a particular rule in a sport.

Plus, Gemini can learn from all this and use that information to predict what someone might want. For example, knowing that the user has expressed an interest in tennis and has been chatting about the sport, it can serve up (pun intended) options for finding tennis courts nearby.

The third aspect of AI-ing Android is allowing much of the smart processing to take place on the phone itself, rather than requiring an internet connection. Gemini Nano provides a low-latency foundation model for on-device AI processing, with multimodal capabilities; this lets the AI better understand the context of what is being asked of it and what is going on around it.

An example of this in action was how Gemini can detect a fraudulent call attempting to scam someone out of their banking details and alert them before the fraud can occur. And because this processing takes place on the phone, you don't have to worry about a remote AI eavesdropping on private conversations.

Similarly, AI technology can use its contextual understanding to provide accurate descriptions of what a visually impaired person is looking at, both in real life and online.

In short, Google plans to make an AI-focused Android more useful and powerful when it comes to finding things and getting things done. And with Gemini Nano bringing multimodal capabilities to Pixel devices later this year, we can reasonably expect the Google Pixel 9 series to be among the first phones to run the redesigned Android.
