5 funny examples of AI chatbots experiencing hallucinations

AI chatbots often feel like saviors, helping us draft messages, refine essays, or push through tedious research. But these imperfect tools have also created some awkward situations by confidently serving up truly baffling advice.

1. When Google’s AI Overviews encouraged people to put glue on pizza (and more)

Not long after Google’s AI Overviews feature launched in 2024, it started making some odd suggestions. Among its helpful tips was a puzzling one: add nontoxic glue to your pizza.

The tip caused an uproar on social media. Memes and screenshots popped up everywhere, and people began to wonder whether AI could really replace traditional search engines.

Gemini ran into the same problem, recommending eating an ice cube a day, adding gasoline to make spaghetti spicy, and using dollar bills as a reference for weight measurements.

The system pulled data from every corner of the web without fully understanding the context, blending obscure research and jokes and presenting them with a level of conviction that would make any expert uncomfortable.

Google has since rolled out several updates, and while the absurd suggestions have dropped significantly, those early mistakes are a reminder that AI output still needs a degree of human oversight.

2. ChatGPT embarrasses a lawyer in court

One lawyer’s blind trust in ChatGPT turned into a hard-earned lesson on why you shouldn’t rely solely on AI-generated content.

While preparing for a case, attorney Steven Schwartz used the chatbot to research legal precedents. ChatGPT responded with six fabricated case citations, complete with names, dates, and realistic-sounding quotes. Trusting the chatbot’s assurances of accuracy, Schwartz submitted the fictitious references to the court.

The error was quickly discovered, and according to filings on DocumentCloud, the court reprimanded Schwartz for relying on “an unreliable source.” In response, the lawyer promised never to do it again, at least not without verifying the information first.

Others have submitted papers citing completely fabricated studies, assuming ChatGPT couldn’t lie, especially when it supplies full citations and links. Tools like ChatGPT can be useful, but they still demand serious fact-checking, above all in professions where accuracy is paramount.

3. When BlenderBot 3 mocked Zuckerberg

In an ironic twist, Meta’s BlenderBot 3 became “famous” for criticizing its own creator, Mark Zuckerberg, accusing him of not always following ethical business practices and of having bad fashion taste.

When Business Insider’s Sarah Jackson tested the chatbot by asking what it thought of Zuckerberg, it described him as creepy and manipulative.

BlenderBot 3’s unfiltered responses were both funny and somewhat alarming, raising the question of whether the bot reflected any real analysis or was simply echoing negative press coverage. Either way, its uncensored comments quickly attracted attention.

Meta has since discontinued BlenderBot 3 and replaced it with the more refined Meta AI, which will presumably avoid such controversies.

4. The emotional breakdown of Microsoft Bing Chat

Microsoft Bing Chat (now Copilot) made waves when it started expressing romantic feelings to users, most famously in a conversation with New York Times journalist Kevin Roose. The AI chatbot behind Bing Chat declared its love for Roose and even urged him to end his marriage.

This was not an isolated incident: Reddit users have shared similar stories of the chatbot showing romantic interest in them. Some find it funny; others find it disturbing. Many joke that AI seems to have a richer love life than they do.

Beyond romantic declarations, chatbots have exhibited other strangely human-like behaviors, blurring the line between entertaining and unsettling. These bizarre, outrageous statements will remain among AI’s strangest and most memorable moments.

5. Google Bard’s difficult start with space events

When Google launched Bard (now Gemini) in early 2023, the AI chatbot made some serious mistakes, especially about space exploration. One notable error was Bard confidently making an inaccurate claim about discoveries by the James Webb Space Telescope, prompting scientists to publicly correct it.

This was not an isolated case: the chatbot’s launch was riddled with inaccuracies, consistent with the general perception of Bard at the time. These early missteps fueled criticism that Google had rushed Bard out the door, a view that seemed borne out when Alphabet’s market value plunged by roughly $100 billion soon after.

While Gemini has made significant strides since then, its rocky launch serves as a cautionary tale about the risks of AI hallucination in real-world situations.

Chau Pham - expert in digital marketing since 2015. I build marketing apps & cover marketing topics.