
Google Bard content should be fact-checked, recommends current Google VP

In a BBC interview, Google UK's vice president says the AI is meant more for collaboration and creative pursuits.

If you need any more reason to be skeptical of generative AI, look no further than a recent BBC interview with Debbie Weinstein, Vice President of Google UK. She recommends people use Google Search to fact-check content generated by the Bard AI.

Weinstein says in the interview that Bard should be considered more of an “experiment” better suited to “collaboration around problem solving” and “creating new ideas”. It seems Google didn’t really intend for the AI to be used as a resource for “specific information”. Besides fact-checking any information offered by Bard, she suggests using the thumbs-up and thumbs-down buttons at the bottom of generated content to give feedback that improves the chatbot. As the BBC points out, Bard’s homepage states it “has limitations and won’t always get it right”, but doesn’t repeat Ms. Weinstein’s advice to double-check results via Google Search.

On one hand, Debbie Weinstein is giving some sound advice. Generative AIs have a massive problem when it comes to getting things right. They hallucinate, meaning a chatbot may produce totally false information when generating text to fit a prompt. This issue even landed two New York lawyers in trouble after they used ChatGPT in a case and presented “fictitious legal research” that the AI had cited.

So it’s certainly not a bad idea to double-check whatever Bard says. However, it’s a little concerning that these comments are coming from a vice president of the company.

Analysis: So, what’s the point?

The thing is, Bard is essentially a fancy search engine. One of its main functions is to be “a launchpad for curiosity”: a resource for factual information. The main difference between Bard and Google Search is that the former is easier to use. It’s a lot more conversational, plus the AI offers important context. Whether Google likes it or not, people are going to use Bard to look things up.

What’s particularly strange about Weinstein’s comments is that they contradict the company’s plans for Bard. At I/O 2023, we saw all the different ways the AI model could enhance Google Search, from providing in-depth results on a topic to creating a fitness plan. Both of these use cases, and more, require factual information to work. Is Weinstein saying this update is all for naught since it uses Google’s AI tech?

While it’s just one person from Google asserting this on the record (so far), she is a vice president at the company. If you’re not supposed to use the chatbot for important information, why is it being added to the search engine as a way to enhance it? Why implement something that’s apparently untrustworthy?

It’s a strange statement, and one we hope isn’t echoed throughout the company. Generative AI is here to stay, after all, and it’s important that we can trust it to output accurate information. We reached out to the tech giant for comment and will update this story if we hear back.

Check out TechRadar’s recently updated list of the best AI tools for 2023.
