Google’s search engine’s latest AI injection will answer voiced questions about images

SAN FRANCISCO – Google is injecting its search engine with more artificial intelligence that will let people voice questions about images and occasionally organize an entire page of results, despite the technology’s past offerings of misleading information.

The latest changes, announced Thursday, mark the next step in an AI-driven makeover that Google launched in mid-May, when it started responding to some questions with summaries written by the technology at the top of its influential results page. Those summaries, called “AI Overviews,” raised fears among publishers that fewer people would click on search links to their websites, sapping the traffic needed to sell the digital ads that help fund their operations.

Google is addressing some of those lingering concerns by inserting even more links to other websites within its AI Overviews, which have already reduced visits to general news publishers such as The New York Times and technology review specialists such as TomsGuide.com, according to an analysis released last month by search traffic specialist BrightEdge.

But Google’s decision to pump even more AI into a search engine that remains the crown jewel of its $2 trillion empire leaves little doubt that the Mountain View, California-based company is tying its future to a technology propelling the biggest industry shift since Apple unveiled the first iPhone 17 years ago.

The next phase of Google’s AI evolution builds on its 7-year-old Lens feature, which processes queries about objects in an image. Lens now generates more than 20 billion searches per month and is especially popular among users aged 18 to 24, a younger demographic that Google is trying to cultivate as it faces competition from AI-powered alternatives such as ChatGPT and Perplexity that position themselves as answer engines.

Now people can use Lens to ask a question in English about something they’re viewing through a camera lens – as if they were talking about it with a friend – and get search results. Users who have signed up for tests of the new voice search features in Google Labs can also capture video of moving objects, such as fish swimming around an aquarium, while asking a conversational question and receiving an answer via an AI Overview.

“The whole goal is, can we make search simpler and easier for people to use, and make it more available so that people can search in any way, wherever they are,” said Rajan Patel, Google’s vice president of search engineering and a co-founder of the Lens feature.

While advances in AI have the potential to make search easier, the technology also sometimes spits out bad information, a risk that threatens to damage the credibility of Google’s search engine if the inaccuracies become too frequent. Google has already had some embarrassing episodes with its AI Overviews, including advising people to put glue on pizza and to eat rocks. The company blamed those missteps on data voids and online troublemakers deliberately trying to steer its AI technology in the wrong direction.

Google is now so confident that it has fixed some of its AI’s blind spots that it will rely on the technology to decide what types of information to display on the results page. Despite the previous bad culinary advice about pizza and rocks, AI will initially be used to present results for queries in English about recipes and meal ideas entered on mobile devices. The AI-organized results are supposed to be divided into different clusters consisting of photos, videos and articles about the topic.