
Is AI On Drugs?: Sorting Out AI "Hallucinations"

Welcome to this week’s Deep-Fried Dive with Fry Guy! In these long-form articles, Fry Guy conducts in-depth analyses of cutting-edge artificial intelligence (AI) developments and developers. Today, Fry Guy dives into the discussion about AI hallucinations. We hope you enjoy!

*Notice: We do not receive any monetary compensation from the people and projects we feature in the Sunday Deep-Fried Dives with Fry Guy. We explore these projects and developers solely to showcase interesting and cutting-edge AI developments and uses.*



Everyone in the tech world is talking about AI’s potential for “hallucinations.” But what in the world does this mean, why does it happen, and what can we do about it? In this article, we will explore this phenomenon. Hopefully, this will help you see things as they truly are.

WHAT ARE “HALLUCINATIONS”?

Commonly, when we think about hallucinations, we tend to think about the side effects of certain drugs or cognitive conditions that cause us to experience things that aren’t really there. According to the Cleveland Clinic, “a hallucination is a false perception of objects or events involving your senses,” often caused by chemical abnormalities in the brain. These hallucinations can be auditory, such as hearing music that isn’t actually playing, or visual, such as seeing objects, lights, or animals that are not actually present. As you might imagine, these experiences can be deceiving, giving individuals a false sense of reality. When the hallucination is discovered, many are left frustrated, wondering whether they can trust their senses at all.

Like humans, AI has been said to hallucinate. However, the term means something slightly different in this context. Unlike humans, AI has no sense perception, so it does not undergo anything like a sensory hallucination. After all, AI is not a conscious being with a brain (although I’m aware some may beg to differ). Nonetheless, when tech bros talk about AI hallucinations, they are referring to AI’s susceptibility to “perceive patterns or objects that are nonexistent, creating nonsensical or inaccurate outputs.” Informally, an AI hallucination is when AI “makes stuff up.” This can deceive human users, who tend to trust AI models to produce reliable information.

AI models have produced some infamous hallucinations over the past few years, confidently reporting inaccurate or invented material. Google’s Gemini model went viral for telling users that a dog had played professional hockey, football, and basketball. On another occasion, it told a user to put glue on their pizza. The potential for hallucinations is not limited to Google, however; it is a problem for AI across the board. OpenAI’s ChatGPT has been found to fabricate facts, misattribute sources, and display a clear bias on certain social and political issues. These types of outputs, presented in what appears to be an “objective” way, have the potential to deceive the public, sometimes about sensitive matters like election information or medical advice.

WHY DO HALLUCINATIONS OCCUR?

If hallucinations are occurring, why don’t tech companies just do something to stop them? Well, it’s not that easy. Hallucinations do not have a quick-fix solution because there are many potential causes for a hallucination. Let’s look at some of these.

  1. Insufficient training data

If the model doesn't have enough data to learn from, it may fabricate or “hallucinate” connections between data points, leading to inaccurate responses to user prompts. This is why it is important for the model to be trained extensively.

  2. Biased training data

If the training data contains biases, the model may produce biased results. For example, imagine a model trained only on CNN, The New York Times, and MSNBC. If this model is asked a question about Donald Trump, it may respond differently than a model trained on content from Fox News and The Daily Wire. Even if developers try to incorporate training data from a diverse set of sources, an inherent bias will always be present in some way, even if mitigated.

  3. Outdated or low-quality training data

If the training data is not up to date or high quality, the model may produce inaccurate results. If the model is only trained on data from social media, for instance, it may treat content as factual when it is not. Most developers are trying to overcome this obstacle by paying for high-quality content and giving models live access to internet data, but teaching models to filter quality data from low-quality data remains a development challenge.

  4. Overfitting

Overfitting occurs when a model is trained on a limited dataset, causing it to memorize specific inputs and outputs, making it difficult to generalize to new data. Take this example from AWS: “Consider a use case where a machine learning model has to analyze photos and identify the ones that contain dogs in them. If the machine learning model was trained on a data set that contained majority photos showing dogs outside in parks, it may learn to use grass as a feature for classification, and may not recognize a dog inside a room.” (A minimal code sketch of this idea appears right after this list.)

  5. Prompt issues

Many times, when people interact with AI models, they use prompts that contain idioms or slang, or they use incorrect grammatical structures that make the input difficult for the model to interpret. If the model was not trained on such slang terms or grammatical mistakes, its ability to produce an accurate response can be thrown off.
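To make the overfitting point from item 4 concrete, here is a minimal sketch in Python using scikit-learn. It is not drawn from AWS or any particular vendor; the dataset and model choice are invented purely for illustration, assuming NumPy and scikit-learn are installed.

```python
# Minimal overfitting sketch: a deliberately unconstrained decision tree
# is fit to just 10 noisy samples of a sine curve, then evaluated on
# 1,000 unseen samples from the same underlying function.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def make_data(n):
    X = rng.uniform(-3, 3, size=(n, 1))                   # inputs
    y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=n)   # noisy targets
    return X, y

X_train, y_train = make_data(10)    # very little training data
X_test, y_test = make_data(1000)    # plenty of unseen data

model = DecisionTreeRegressor()     # no depth limit: free to memorize
model.fit(X_train, y_train)

print("train R^2:", model.score(X_train, y_train))  # essentially perfect
print("test  R^2:", model.score(X_test, y_test))    # noticeably worse
```

The tree scores almost perfectly on the ten points it memorized (noise included) but noticeably worse on fresh data from the same function; that train/test gap is the essence of overfitting.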

Although there may be more reasons AI models hallucinate, these five factors are some of the major ones. Because it is difficult to control the training data and identify every problem before it occurs, hallucinations will likely never be eliminated entirely. However, as developers learn from their mistakes, there may be ways to mitigate these problems and reduce both the frequency and the severity of hallucinations.

WHAT SHOULD WE DO?

Because AI tends to hallucinate, any given response might be false, and there is no warning when it happens: the model delivers a fabricated answer just as confidently as a correct one. Even though developers are working to reduce this issue, at any given moment, when you ask LLMs like ChatGPT or Claude a question, they could be fabricating the response. So what can we do? Should we throw out AI models as “fake news” machines? Well, not exactly.

Just because AI has the ability to hallucinate does not mean we should disregard everything these models say. For the most part, AI models produce reliable outputs. They can be helpful for providing information about car maintenance, cooking, academic research, and more. Not to mention, these models are becoming more and more convenient to use. In just a few clicks, you have all kinds of information at your fingertips, 24/7. But how do we know whether AI is handing us a delicious new recipe or poisoning our food?

In light of hallucinations, it seems we should not treat AI as an infallible, factual resource. We should not take its word as some divinely inspired truth. Rather, we should view the outputs of AI models the way we would view the testimony of a generally reliable human.

Consider an example. If you are trying to find the bathroom in an unfamiliar building and you ask a stranger where it is, they may confidently give you directions. They might say, “The bathroom is down those stairs and on your left.” It might be the case that they are lying to you, or that they are mistaken. But generally, you have good reason to trust the individual without thinking too much about it. If it turns out they are correct, you aren’t surprised. If they are wrong, you may be slightly frustrated, but that does not mean you didn’t have good reason to trust their word. In this regard, maybe trusting AI is something like trusting the testimony of a stranger.

Consider another example. You are figuring out whether you should get shoulder surgery. You tell a stranger your symptoms, and he tells you that you need surgery. Likely, you won’t take the stranger at his word. Because this is an important matter, you might want to consult a specialized doctor and/or get multiple opinions. What this example is meant to draw out is how we should handle AI’s testimony on more sensitive matters. As AI models perform right now, it seems we can generally trust AI with low-stakes information. However, as the stakes increase, so should the skepticism about AI’s outputs.
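If you want to bake that stakes-based skepticism into how you use a chatbot programmatically, one rough pattern is sketched below. The `ask_model` function is a hypothetical placeholder (not a real API call) for whatever chat service you use, and the three-sample threshold is arbitrary; the point is the habit of sampling multiple answers, treating disagreement as a red flag, and escalating high-stakes questions to a human.

```python
# Sketch of "higher stakes, higher skepticism" when querying a chatbot.
from collections import Counter

def ask_model(question: str) -> str:
    # Hypothetical placeholder -- swap in the chat API of your choice.
    raise NotImplementedError("wire this up to your model")

def answer_with_skepticism(question: str, high_stakes: bool, samples: int = 3) -> str:
    # Ask the same question several times; hallucinations often show up as
    # answers that change from run to run.
    answers = [ask_model(question) for _ in range(samples)]
    most_common, count = Counter(answers).most_common(1)[0]

    if high_stakes:
        # Medical, legal, financial, or election questions: never rely on the model alone.
        return f"Model suggests {most_common!r} -- verify with a qualified human."
    if count < samples:
        # The model contradicted itself, a hint that it may be making things up.
        return f"Answers disagreed ({answers}); treat with caution."
    return most_common
```

This is only a heuristic: a model can repeat the same hallucination consistently, so agreement across samples is a sanity check, not a guarantee.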

At the end of the day, when you approach AI models, it’s important to recognize their limitations. When AI renders a response to your question, remember that the model may be biased, and that it may be lying straight to your face. If you’re not careful, you may be seeing things that aren’t truly there.
