Dwindling consumer interest in chatbots contributed to a drop in AI-sector revenues during the second quarter of 2024.
A recent study titled "Larger and more instructable language models become less reliable," published in the journal Nature, found that AI chatbots are making more mistakes over time as newer models are released.
Lexin Zhou, one of the study's authors, theorized that because AI models are optimized to always provide believable answers, seemingly correct responses are prioritized and served to the end user regardless of their actual accuracy.
These AI hallucinations are self-reinforcing and tend to compound over time, a phenomenon exacerbated by training newer large language models on the output of older ones, which can result in "model collapse."