If you’ve used Google Chrome in recent years, you may have noticed that every time you have to come up with a new password, or change an existing one, for a site or app, a small “Suggest strong password” dialog box appears – and it looks like that dialog could soon offer AI-powered password suggestions.
An astute software development observer has noted that Google is gearing up to combine this feature with the capabilities of Gemini, its latest large language model (LLM).
The discovery was made by @Leopeva64 on X. They found references to Gemini in patches submitted to Gerrit, a web-based code review system developed by Google and used in the development of Google products such as Android.
These findings appear to be backed up by screenshots that show a glimpse of how Gemini can integrate into Chrome to give you even better password suggestions when you want to create a new password or change a password you’ve previously set.
Google’s Gemini MAY give you suggestions for stronger passwords in the future. These suggestions would be shown when you create a new password or when you change a saved password. This is mentioned in a few patches in Gerrit: https://t.co/5WWhDn4km0 https://t.co/okjc4cjQ93 pic.twitter.com/7WB2GFrV00 (April 20, 2024)
Guesswork for Gemini
One line of code that caught my attention says that “removing all passwords will disable this feature.” I’m wondering whether this means what it appears to say – that the feature is disabled if a user deletes all their saved passwords – or whether it refers only to passwords generated by the ‘Suggest strong password’ feature.
The last screenshot that @Leopeva64 provides is also intriguing as it appears to show the prompt that Google engineers added to get Gemini to generate a suitable password.
This is a very interesting move from Google and could work out well for Chrome users who rely on the strong password suggestion feature. Still, I’m a little wary of the risks associated with this password generation method, risks shared by many LLM-based approaches. LLMs are susceptible to information leaks caused by prompt injection attacks, which aim to trick AI models into revealing information that their creators, or the individuals and organizations using them, may want to keep private, such as a person’s login credentials.
An important safety consideration
That sounds scary, but as far as we know this hasn’t happened yet with any widely used LLM, including Gemini. For now it’s a theoretical fear, and there are standard password security practices that tech organizations like Google use to prevent data breaches.
These include encryption, which scrambles data so that only authorized parties can access it during the multiple stages of password generation and storage, and hashing, a one-way data conversion designed to make reverse engineering the original data difficult.
You could also use another LLM like ChatGPT to manually generate a strong password, although I suspect Google knows more about how to do this safely, and I would only advise experimenting with that approach if you’re a software security professional.
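If you’d rather not involve any LLM at all, a cryptographically secure random generator does the job entirely on your own machine. Here’s a short sketch using Python’s standard-library `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Because nothing leaves your device, there’s no prompt to inject and no service to leak from, which sidesteps the LLM-related risks discussed above.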
It’s not a bad proposal, and it’s a use of AI that could genuinely benefit users, but Google will have to put in just as much (if not more) effort to ensure that Gemini is locked down and as impenetrable to outside attack as possible. If it implements this feature and it somehow causes a massive data breach, that will likely damage people’s trust in LLMs and tarnish the reputations of the tech companies, including Google, that champion them.