LLM Hallucination

LLM hallucination refers to a situation where a large language model (LLM) generates output that appears plausible but is factually incorrect, misleading, or entirely fabricated. Hallucinations often occur when the model lacks relevant training data, misinterprets the context, or fills gaps in its knowledge with confident-sounding guesses.

Also known as: AI hallucination, model fabrication, synthetic error

Comparisons

  • LLM Hallucination vs. Typo or Bug: A hallucination is fluent, coherent output that is wrong in substance, not a coding error or spelling mistake.
  • LLM Hallucination vs. Bias: Bias reflects skewed viewpoints in the training data, while hallucination is about making things up that aren't true or real.

Pros

  • None — hallucinations are considered undesirable and can undermine trust in the model’s output.

Cons

  • Misinformation: Can spread false or misleading content
  • Reduced reliability: Impacts decision-making in sensitive domains like healthcare or law
  • Difficult to detect: Confident tone can obscure factual errors (see the consistency-check sketch below)
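
One rough way to surface this risk is to ask the model the same question several times and flag low agreement between the answers. The Python sketch below is only a heuristic, not a reliable detector, and it assumes a generic ask() callable standing in for whatever LLM client is actually in use.

    from collections import Counter
    from typing import Callable, List

    def looks_consistent(ask: Callable[[str], str], question: str,
                         n: int = 5, threshold: float = 0.6) -> bool:
        # Sample the same question n times (with sampling/temperature enabled)
        # and measure how often the most common, crudely normalized answer appears.
        answers: List[str] = [ask(question).strip().lower() for _ in range(n)]
        _, top_count = Counter(answers).most_common(1)[0]
        return top_count / n >= threshold  # low agreement -> higher hallucination risk

Low agreement does not prove a hallucination, and high agreement does not rule one out, but inconsistent answers are a useful signal that a claim deserves manual verification.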

Example

An LLM might respond to a query about a scientific paper by citing a source that sounds legitimate but does not actually exist. For instance, it could fabricate a research study or quote a journal article that was never published, making it seem credible due to natural-sounding language.
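
Fabricated citations of this kind can often be caught by checking whether the cited identifier actually resolves. The sketch below shows one minimal approach, assuming the claimed reference includes a DOI and using the public Crossref REST API (api.crossref.org/works/{doi}), which returns HTTP 404 for DOIs it has no record of; the DOI shown is a made-up placeholder.

    import requests

    def doi_exists(doi: str, timeout: float = 10.0) -> bool:
        # Crossref returns HTTP 200 with metadata for a registered DOI and 404 otherwise.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
        return resp.status_code == 200

    claimed_doi = "10.1234/fake-journal.2023.001"  # hypothetical DOI taken from a model answer
    if not doi_exists(claimed_doi):
        print(f"DOI {claimed_doi} not found in Crossref; the citation may be hallucinated.")

A missing Crossref record is not conclusive on its own, since not every publisher registers there, but it is a cheap first filter before trusting a model-supplied reference.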
