
Hallucination

When an LLM produces fluent but factually incorrect output.

Definition

Hallucination occurs when a language model produces fluent, confident-sounding output that is factually incorrect: citing nonexistent papers, inventing API methods, or fabricating quotes. It happens because the model optimizes for likely-sounding text rather than truth. Mitigations include grounding responses via RAG, prompting the model to cite specific sources or admit uncertainty, validating structured output, and human review for high-stakes content.
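The mitigations above can be combined in practice. Below is a minimal Python sketch showing three of them together: grounding the answer in retrieved context, instructing the model to admit when the answer is missing, and validating the structured output before trusting it. The `call_llm` function, the prompt wording, and the sample context are illustrative placeholders, not a specific provider's API.

```python
import json

# Hypothetical helper: replace with your actual LLM provider's client call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Swap in your LLM API call here")

# Context retrieved via RAG; the model may only answer from this text.
CONTEXT = (
    "XYZ Corp closed at 142.17 USD on 2024-03-04. "
    "No closing price is available for 2024-03-05."
)

PROMPT = f"""Answer using ONLY the context below.
If the answer is not in the context, reply {{"answer": null, "quote": null}}.
Return JSON: {{"answer": <string or null>, "quote": <supporting sentence or null>}}

Context:
{CONTEXT}

Question: What was the closing price of stock XYZ on March 5, 2024?"""

def validate(raw: str) -> dict:
    """Structured-output check: parse the JSON and verify the answer is grounded."""
    data = json.loads(raw)  # raises on malformed output instead of passing it along
    if data.get("answer") is not None:
        quote = data.get("quote") or ""
        # The supporting quote must appear verbatim in the retrieved context,
        # otherwise the answer is treated as a likely hallucination.
        if quote not in CONTEXT:
            raise ValueError("Answer is not grounded in the retrieved context")
    return data

# Usage: result = validate(call_llm(PROMPT))
```

A rejected validation would typically trigger a retry, a fallback to "I don't know", or escalation to human review rather than publishing the unverified answer.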

Example

Asked 'What was the closing price of stock XYZ on March 5, 2024?', the model invents a plausible-sounding but fabricated number instead of admitting it has no data for that date.

When to use

Always assume hallucination is possible. Validate any factual claim before publication or downstream action.

