Google Bard content should be fact-checked, Google VP recommends

If you need even more reason to be skeptical about generative AI, look no further than a recent BBC interview with Debbie Weinstein, Vice President of Google UK. She recommends people use Google Search to check content generated by the Bard AI.

Weinstein says in the interview that Bard should be viewed more as an “experiment” better suited to “collaboration around problem solving” and “creating new ideas”. It seems that Google had no real intention of positioning the AI as a source of “specific information”. In addition to checking all the information provided by Bard, she suggests using the thumbs up and thumbs down buttons at the bottom of the generated content to provide feedback to improve the chatbot. As the BBC points out, Bard’s homepage states that it “has limitations and won’t always get it right”, but it does not echo Mrs Weinstein’s advice to double-check the results via Google Search.

On the one hand, Debbie Weinstein gives good advice. Generative AIs have a huge problem when it comes to getting things right. They hallucinate, which means that a chatbot can produce completely wrong information when generating text to match a prompt. This issue even landed two New York lawyers in trouble after they used ChatGPT in a case and presented “fictitious legal research” that the AI had cited.

So it’s certainly not a bad idea to check everything Bard says. However, given that these comments come from a vice president of the company, it’s a bit concerning.

Analysis: So, what’s the point?

The thing is, Bard is essentially a fancy search engine. One of its main functions is to be “a curiosity launch pad”; a source of factual information. The main difference between Bard and Google Search is that the former is more user-friendly. It’s much more conversational, and the AI provides important context. Whether Google likes it or not, people are going to use Bard to look things up.

What’s especially odd about Weinstein’s comments is that they contradict the company’s plans for Bard. During I/O 2023, we saw several ways the AI model can improve Google Search, from providing in-depth results on a topic to even creating a fitness plan. These use cases and more require factual information to work. Is Weinstein saying this update is all for naught because it uses Google’s AI technology?

While she is only one person from Google making this claim officially (so far), she is a vice president at the company. If you are not supposed to use the chatbot for important information, why add it to the search engine at all? Why implement something that is seemingly unreliable?

It’s a strange statement; one that we hope is not repeated across the company. Generative AI is here to stay, after all, and it’s important that we trust it to deliver accurate information. We’ve reached out to the tech giant for comment. This story will be updated at a later date.
