Sundar Pichai, the Chief Executive of Google’s parent company Alphabet, has warned the public not to “blindly trust” everything artificial intelligence tools generate.
In an exclusive interview with the BBC, Mr. Pichai stressed that current state-of-the-art AI technology is “prone to errors.” He urged users to treat AI tools, including Google’s own Gemini, as supplements, suggesting they be used alongside more grounded sources, such as traditional Google search, to cross-check critical information.
“People have to learn to use these tools for what they’re good at, and not blindly trust everything they say,” Pichai noted, highlighting that AI is best suited for creative tasks rather than providing absolute, factual certainty.
The warning comes as Google faces sharp criticism over the reliability of its own products. The rollout of its “AI Overviews,” which summarize search results, was recently marred by mockery and concern over erratic and inaccurate responses.
Experts and critics argue that tech giants such as Google should focus on making their systems fundamentally more reliable, rather than displaying disclaimers and shifting the burden of fact-checking onto the consumer.
Gina Neff, a professor of responsible AI at Queen Mary University of London, emphasized the danger of this lack of reliability, particularly when users ask sensitive questions about health, science, or news.
She noted that generative AI systems tend to “make up answers” to please the user, creating a significant problem when accuracy is paramount.
Despite these concerns, Google is pushing forward with its AI integration. The company recently unveiled Gemini 3.0, claiming the model boasts industry-leading performance in reasoning and multimodal understanding across text, photo, and video inputs.
This launch signals what Mr. Pichai calls a “new phase of the AI platform shift,” as Google works to defend its online search dominance against rivals like OpenAI’s ChatGPT.
However, research shows the caution is well-founded. An earlier BBC study found that AI assistants, including Google’s, misrepresented news content nearly half the time, underlining the ongoing need for user vigilance.
Chris Price