Google blames users for wildly inaccurate ‘AI Overview’ outputs
Elon Musk recently predicted AI will surpass humans by 2025, yet Google's models remain so inaccurate that they're being tuned by hand.
"AI Overview," an artificial intelligence feature recently unveiled by search giant Google, has been serving up inaccurate and potentially dangerous summaries in response to user searches, and Google doesn't appear to have an actual fix for the problem.
At the time of this article's writing, Google had disabled certain queries for its "AI Overview" feature after widespread reports that the system was generating erroneous and potentially harmful outputs.
Reports began circulating across social and news media of a user query asking the search engine how to keep cheese from sliding off pizza, to which the AI system reportedly responded by suggesting the user add glue. In another batch of apparent missteps, the AI system purportedly told users that at least two dogs owned hotels and pointed to a non-existent dog statue as evidence.
Author: Tristan Greene