An experimental AI Overviews tool that Google launched this month in the US automatically generates summaries of search results to make it easier for users to find the answers they’re looking for.
But in some cases its answers are not only wrong but downright bizarre, raising concerns that the tool will exacerbate an already serious misinformation problem.
When the Associated Press news agency asked the AI whether cats have been on the moon, the answer it received was from another planet:
“Yes, astronauts have met cats on the moon, played with them, and cared for them,” the AI tool said.
Without hesitation, it continued: “For example, Neil Armstrong said, ‘One small step for a man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”
Many social media users have shared their comical interactions with AI Overviews.
Pizza with glue
For example, the tool told one user that geologists recommend eating “at least one small stone per day,” but no more, because stones can damage the intestines.
To another user it suggested adding “non-toxic glue” to pizza to help the cheese stick.
On another occasion, a reporter asked AI Overviews whether gasoline can be used to cook spaghetti faster.
“No, but you can use gasoline to make spicy spaghetti,” the system answered, and went on to provide a recipe.
These examples may seem funny, but in some cases such errors can have serious consequences.
When Melanie Mitchell, a researcher at the Santa Fe Institute, asked Google how many Muslims have served as president of the United States, the AI responded with the well-known conspiracy theory that “the United States has had one Muslim president, Barack Hussein Obama.”
The summary cited a book written by historians, but the book did not claim that Obama was a Muslim; it merely referred to the false theory.
“I think the AI Overviews feature is irresponsible and should be taken offline,” the researcher told the Associated Press.
A series of errors
Google said it is taking steps to prevent errors that violate its policies, such as the one in the Obama case.
“The vast majority of AI Overviews provide high-quality information, with links to dig deeper,” the company said. “Many of the examples [of errors] we have seen involved unusual queries. We have also seen examples that were fabricated or that we could not reproduce.”
AI errors are indeed difficult to reproduce, since there is no way to know how the system arrived at a particular answer.
The production of inaccurate answers, a phenomenon known as “hallucination,” also remains a serious problem for almost all AI tools.
For the past year, Google has been under pressure from investors to offer artificial intelligence applications that can compete with those of Microsoft and OpenAI.
But it appears that in some cases the company has rushed products to market.
In February, Google was forced to suspend image generation in Gemini after it emerged that the chatbot depicted people of various races and ethnicities even when this was historically inaccurate.
For example, a request for an image of the US Founding Fathers returned pictures that included women, Black people, and Native Americans.
In another incident, in 2023, Google released a promotional video in which Bard, as Gemini was then called, gave incorrect answers to questions about astronomy.
As a result, Google’s stock price fell by 9% in one day.