Key Concepts

Hallucinations

When LLMs confidently state things that aren't true, and why it's a fundamental problem

What it is

Hallucination refers to an LLM generating factually incorrect information with apparent confidence. The model doesn't flag its uncertainty: it produces false citations, wrong dates, invented names, and incorrect facts with the same fluency as correct ones.

The root cause lies in the training objective: during pre-training, models are rewarded for predicting the correct next token but not specifically penalized for confident wrong guesses. The model learns to generate plausible-sounding text, and "plausible" doesn't mean "verified."
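A toy calculation makes this concrete. Standard next-token cross-entropy loss only looks at the probability assigned to the observed token; how the model distributes the rest of its probability mass is invisible to the objective. The distributions and vocabulary below are made up for illustration:

```python
import math

# Toy next-token distributions over a 4-token vocabulary.
# The observed "correct" next token is index 0 in both cases.
confident_wrong = [0.6, 0.39, 0.005, 0.005]  # leftover mass piled onto one wrong token
hedged          = [0.6, 0.13, 0.13, 0.14]    # leftover mass spread out

def next_token_loss(dist, target=0):
    """Cross-entropy for one step: -log p(target).
    Only the probability of the observed token matters."""
    return -math.log(dist[target])

# Both models score identically (~0.511), even though one of them
# would, when sampled, confidently emit a single wrong token.
print(next_token_loss(confident_wrong))
print(next_token_loss(hedged))
```

Since both distributions incur the same loss, pre-training alone gives the model no reason to prefer hedging over a confident wrong guess.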

Hallucination rates vary by domain (better on common knowledge, worse on niche facts), model size, and whether the model has access to tools like web search.

Why it matters

Hallucinations are the #1 practical failure mode you'll encounter in AI products. Any client application needing factual accuracy (legal research, medical information, financial data) must be designed around this. Standard mitigations include RAG (grounding the model in retrieved documents), tool use (letting the model verify via search), and prompt engineering that instructs the model to acknowledge uncertainty.
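The RAG and prompt-engineering mitigations can be sketched together: retrieve relevant text, then build a prompt that constrains the model to that text and explicitly permits "I don't know." The retrieval below is a toy keyword-overlap ranking and the documents are invented for illustration; a real system would use embedding search and an actual LLM call:

```python
# Minimal sketch of RAG-style grounding plus an uncertainty instruction.
# retrieve() is a stand-in: naive word overlap instead of vector search.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_grounded_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved snippets
    and instructs it to acknowledge uncertainty rather than guess."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below. If the answer is not "
        'in the context, reply "I don\'t know."\n\n'
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "The James Webb Space Telescope launched on 25 December 2021.",
    "Paris is the capital of France.",
    "Python was created by Guido van Rossum.",
]
print(build_grounded_prompt("When did the James Webb Space Telescope launch?", docs))
```

The grounding and the uncertainty instruction work together: retrieval gives the model verified text to lean on, and the escape hatch gives it a sanctioned alternative to inventing an answer.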

Resources

AI Hallucinations
youtube.com· Martin Keen's lightboard breakdown of hallucination types, why LLMs "make stuff up," and practical minimization steps. Excellent standalone intro.
10 min
Deep Dive into LLMs like ChatGPT (section: hallucinations, tool use, knowledge/working memory, ~1:20:00)
youtube.com· Explains hallucinations as fundamental to how LLMs work: they predict statistically likely word sequences, not facts. The "vague recollection" vs. "working memory" framing is very clarifying.
20 min
Tuning Your AI Model to Reduce Hallucinations
mediacenter.ibm.com· Five concrete prompting techniques to reduce hallucinations. Practical and actionable, good follow-up after the "what are they" videos.
8 min
What Are AI Hallucinations?
ibm.com· Covers famous examples (Bard/JWST, lawyer citing fake cases, Meta Galactica), root causes, and mitigation strategies including RAG.
10 min
Hallucination (artificial intelligence)
en.wikipedia.org· Surprisingly good comprehensive reference. Covers the full history, real-world legal consequences (Mata v. Avianca), the debate over terminology ("confabulation" vs. "hallucination"), and current research.
15 min