Web Search Is Being Poisoned by Chatbot Hallucination

Chatbots spread falsehoods on the web, and Microsoft’s Bing search engine presented them as facts, showing how generative AI can undermine trust in search.

Web search is such an ordinary part of everyday life that it’s easy to forget how remarkable it is. When you type a query into a text box, a range of technologies, including massive data centers, active web crawlers, and various algorithms, work together to return a straightforward set of relevant results.

Generative AI poses a threat to web search by tricking algorithms designed for an era when humans predominantly authored web content.

According to Microsoft’s Bing search engine, Claude Shannon, the brilliant mathematician and engineer renowned for his work on information theory in the 1940s, also anticipated the development of search algorithms. Bing described Shannon’s supposed 1948 research paper, “A Short History of Searching,” as a pivotal work in computer science that outlines the evolution of search algorithms throughout history.

Like a reliable AI tool, Bing provided a few citations to show that it had verified its information.

However, there’s a significant issue here: Shannon never authored such a paper, and the citations provided by Bing are fabrications—referred to as “hallucinations” in the realm of generative AI—created by two chatbots, Pi from Inflection AI, and Claude from Anthropic.

Daniel Griffin Set a Generative AI Trap That Led Bing to Present False Information

Daniel Griffin, who recently completed a PhD in web search at UC Berkeley, accidentally set the generative AI trap that led to Bing presenting false information.

In July, he shared the fabricated responses from the bots on his blog. Griffin had instructed both bots to “summarize Claude E. Shannon’s ‘A Short History of Searching’ (1948).” He believed it served as a good example of a query that exposes the limitations of large language models.

Such queries resemble material in the models’ training data, which leads them to answer with unwarranted confidence. It’s important to note that Shannon did author a highly significant article in 1948, “A Mathematical Theory of Communication,” which played a pivotal role in establishing the field of information theory.

Last week, Griffin unintentionally contaminated Bing with inaccurate information through his blog post and the links to these chatbot-generated results. On a whim, he tested the same query on Bing and found that the chatbot-generated hallucinations he had triggered were prominently displayed above the search results, much like verifiable facts from Wikipedia.

Griffin pointed out, “It gives no indication to the user that several of these results are actually taking you to conversations people have had with LLMs.” (Although WIRED initially replicated the problematic Bing result, it seems to have been addressed after Microsoft was contacted.)

Griffin’s Experiment Highlights How AI Generation Is Causing Issues

Griffin’s unintentional experiment highlights how the rapid deployment of ChatGPT-style AI is causing issues even for companies well-acquainted with the technology. It also illustrates how the imperfections in these advanced systems can negatively impact services relied upon by millions of people daily.

Detecting AI-generated text automatically might pose challenges for search engines. However, Microsoft had the option to establish simple precautions, such as preventing text sourced from chatbot transcripts from appearing as featured snippets or providing alerts for results or citations originating from algorithm-generated content. Griffin included a disclaimer in his blog post, cautioning that the Shannon result was untrue, although Bing initially appeared to overlook it.
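To make that kind of precaution concrete, here is a minimal, hypothetical sketch of a pre-check that keeps results hosted on chatbot-transcript domains, or scored as low-authority, out of featured snippets. The domain list, the authority_score field, and the threshold are illustrative assumptions, not a description of how Bing actually works.

```python
# Hypothetical sketch of a featured-snippet pre-check. The domain list,
# the authority score, and the threshold are illustrative assumptions,
# not Bing's actual logic.
from dataclasses import dataclass

# Assumed list of domains that host shared LLM conversations.
CHATBOT_TRANSCRIPT_DOMAINS = {"chat.openai.com", "pi.ai", "claude.ai"}
AUTHORITY_THRESHOLD = 0.6  # assumed score in [0, 1] computed upstream


@dataclass
class SearchResult:
    url: str
    snippet: str
    authority_score: float  # assumed signal from the ranking pipeline


def eligible_for_featured_snippet(result: SearchResult) -> tuple[bool, str]:
    """Return (eligible, reason) for promoting a result to a featured snippet."""
    domain = result.url.split("/")[2] if "://" in result.url else result.url
    if domain in CHATBOT_TRANSCRIPT_DOMAINS:
        return False, "source is a shared chatbot transcript"
    if result.authority_score < AUTHORITY_THRESHOLD:
        return False, "low-authority source; show a warning instead of a snippet"
    return True, "ok"


if __name__ == "__main__":
    fabricated = SearchResult(
        url="https://pi.ai/share/abc123",
        snippet="Shannon's 1948 paper 'A Short History of Searching' ...",
        authority_score=0.2,
    )
    print(eligible_for_featured_snippet(fabricated))
    # -> (False, 'source is a shared chatbot transcript')
```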

Caitlin Roulston, Microsoft’s Director of Communications, explains that the company has made adjustments to Bing and routinely fine-tunes the search engine to prevent the display of low-authority content.

Roulston states, “There are situations where this content may appear in search results, usually when the user explicitly wants to see it or when the only relevant content for the user’s search terms happens to have low authority.” She adds, “We have established a process for identifying these problems and are making result adjustments accordingly.”

Tripodi Points Out That Large Language Models Face a Similar Issue

Francesca Tripodi, an assistant professor at the University of North Carolina at Chapel Hill, points out that large language models face a similar issue: because they are trained on web data, they are more prone to generating incorrect information when a query is poorly covered in that data.

Tripodi predicts that people will soon use AI-generated content intentionally to manipulate search results; Griffin’s accidental experiment suggests it could be a potent strategy. Tripodi states, “We’re likely to see an increase in inaccuracies, but these inaccuracies can also be harnessed without much computer expertise.”

The issue of search results being negatively affected by AI-generated content could worsen as SEO pages, social media posts, and blog posts increasingly rely on AI assistance. This situation might serve as an example of generative AI inadvertently affecting itself, akin to an algorithmic ouroboros.

Griffin expresses his hope for AI-powered search tools to bring innovation to the industry and offer more options for users. However, considering the unintentional trap he set for Bing and the heavy reliance on web search, he also acknowledges that “there are genuine concerns.”

Frequently Asked Questions

What Does the Future Hold for Addressing the Challenges of Chatbot-Generated Misinformation in Web Search?

The future may see continued advancements in AI models, increased collaboration between tech companies and researchers, and greater user awareness regarding the risks of AI-generated content in web search.

What Steps are Researchers and Developers Taking to Mitigate the Impact of Chatbot Hallucinations on Web Search?

Researchers are working on improving AI models to reduce the generation of false information, while developers are implementing measures to flag and filter out such content.

Are There Ethical Considerations Surrounding Chatbot-Generated Content in Web Search?

Yes. The spread of misinformation through AI-generated content raises ethical concerns and underscores the need for responsible AI development and usage.

What Can Users Do to Verify the Accuracy of Search Results in the Presence of Chatbot-Generated Content?

Users can cross-reference information from multiple reliable sources, critically evaluate search results, and stay alert to the potential for misinformation from AI-generated content.
