株式会社オブライト
AI | 2026-05-17

Hallucination

Also known as: hallucination (幻覚) / AI hallucination (AI幻覚) / LLM hallucination (LLM幻覚)

The phenomenon where an LLM confidently generates factually incorrect content, such as fabricated citations, wrong figures, or nonexistent APIs. It is one of the most significant risks in production LLM deployments.


Overview

Hallucination occurs because LLMs predict statistically plausible token sequences, not factually grounded ones. It is most common with rare facts, recent events, specific figures, and named entities. The model produces hallucinated content confidently, making errors hard to spot without external verification.
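The root cause described above can be illustrated with a toy model. This is a minimal sketch, not a real LLM: a bigram model trained on a small corpus that happens to repeat a false claim. Because generation picks the statistically most frequent continuation, the model "confidently" reproduces the majority claim regardless of its truth.

```python
from collections import Counter, defaultdict

# Illustrative assumption: a training corpus that repeats a false claim
# more often than the true one.
corpus = (
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon is made of rock ."
).split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev: str) -> str:
    # Pick the most statistically plausible next token,
    # with no notion of factual accuracy.
    return bigrams[prev].most_common(1)[0][0]

print(predict("of"))  # -> 'cheese': the frequent (false) claim wins
```

Real LLMs are vastly more sophisticated, but the failure mode is analogous: frequency and plausibility in the training data drive the output, which is why rare facts and specific figures are the most error-prone.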

Mitigations

Key mitigations include retrieval-augmented generation (RAG) that grounds answers in verified sources, chain-of-thought (CoT) reasoning traces that expose intermediate steps, and LLM-as-a-judge pipelines that cross-check outputs. For high-stakes domains (medical, legal, financial), a mandatory human review step in the workflow is strongly recommended.
