Hallucinations are a fundamental challenge in generative AI: models produce fluent text that is not reliably grounded in truth.
Types
- Factual errors: Wrong information stated confidently
- Entity confusion: Mixing up people, places, or dates
- Citation fabrication: Inventing sources or references that do not exist
- Logical inconsistency: Outputs that contradict themselves
Mitigation
- Retrieval-augmented generation (RAG) to ground answers in source documents (see the sketch after this list)
- Confidence calibration so the model's expressed certainty tracks its actual accuracy
- Fact-checking pipelines that verify generated claims against trusted sources
- Citation requirements that force outputs to point to verifiable references
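To make the RAG point concrete, here is a minimal sketch of grounding a prompt in retrieved passages. The tiny corpus, the keyword-overlap retriever, and the final model call left as a placeholder are all illustrative assumptions, not a specific library's API.

```python
# Minimal RAG grounding sketch. The corpus, the naive keyword-overlap
# retriever, and the downstream model call are illustrative assumptions,
# not a real retrieval stack or model API.

CORPUS = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "The Great Wall of China is over 13,000 miles long.",
    "Marie Curie won Nobel Prizes in both Physics and Chemistry.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (stand-in for a real retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(query_terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below. "
        "If the context does not contain the answer, say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    question = "When was the Eiffel Tower completed?"
    passages = retrieve(question, CORPUS)
    prompt = build_grounded_prompt(question, passages)
    print(prompt)  # send this prompt to the model of your choice
```

The design point is that the prompt constrains the model to the retrieved context and gives it an explicit "I don't know" escape hatch, which reduces confident fabrication when retrieval comes up empty.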