Definition
Hallucination occurs when a language model produces fluent, confident-sounding output that is factually incorrect: citing nonexistent papers, inventing API methods, or fabricating quotes. It happens because the model is optimized to produce likely-sounding text rather than true statements. Mitigations include grounding via retrieval-augmented generation (RAG), prompting the model to cite specific sources or admit uncertainty, structured output validation, and human review for high-stakes content.
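As a rough illustration of the structured-output-validation idea, the sketch below parses a model response against a fixed schema and rejects unsourced claims. The FactualAnswer schema, its fields, and the rejection rules are illustrative assumptions rather than any standard, and it assumes the pydantic library is installed.

import json
from typing import Optional
from pydantic import BaseModel, ValidationError

class FactualAnswer(BaseModel):
    answer: Optional[str]   # None signals "I don't know"
    source: Optional[str]   # where the model claims the fact comes from
    confident: bool         # model's self-reported confidence

def validate_response(raw: str) -> Optional[FactualAnswer]:
    # Reject output that is not valid JSON or does not match the schema.
    try:
        parsed = FactualAnswer(**json.loads(raw))
    except (json.JSONDecodeError, TypeError, ValidationError):
        return None
    # Treat an answer with no supporting source as a likely hallucination.
    if parsed.answer is not None and parsed.source is None:
        return None
    return parsed

A None return here would route the response to a retry or to human review instead of downstream use.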
Example
Asked 'What was the closing price of stock XYZ on March 5, 2024?', the model invents a plausible-sounding but fabricated number instead of admitting it has no access to market data for that date.
When to use
Always assume hallucination is possible. Validate any factual claim before publication or downstream action.
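As one concrete form of such validation, the sketch below checks a model-cited DOI against the public CrossRef API, which returns 404 for DOIs it cannot resolve. The DOI string is a hypothetical placeholder, and a production check would also need retries, rate limiting, and error handling.

import requests

def doi_exists(doi: str) -> bool:
    # A 404 from CrossRef strongly suggests a fabricated citation.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

if not doi_exists("10.1000/xyz123"):   # hypothetical DOI for illustration
    print("Citation could not be verified; hold for human review.")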