AI Trust Glossary · Canonical Definition
Hallucination
When an AI system generates plausible-sounding but factually incorrect or entirely fabricated content, presented with the same confidence as accurate output.
Explanation
Hallucinations occur because language models predict likely next tokens, not true statements: a hallucination is simply an output the model confidently generates that happens to be false. The model has no mechanism to distinguish confabulation from accurate recall.
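A minimal toy sketch can make the mechanism concrete (this is an illustration, not any production model): a bigram sampler picks each next word purely by observed frequency, so every continuation is "plausible" under its statistics, yet nothing in the sampling loop checks whether the resulting claim is true.

```python
import random

# Toy next-token model: a bigram table built from a tiny "training corpus".
# The model only knows which words tend to follow which; it has no notion
# of whether a generated sentence is factually correct.
corpus = "the drug interacts with aspirin . the drug treats headaches .".split()
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, n=6, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))  # chosen by frequency alone
    return " ".join(out)

# Every token is locally likely, but no step verifies the overall claim.
print(generate("the"))
```

Scaled up, the same dynamic applies: a large model's next-token distribution encodes plausibility, not truth, which is why fluent and fabricated outputs look identical from the inside.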
Why it matters
In high-stakes domains (legal, medical, financial), hallucinations can cause direct harm. A medical AI agent that confidently fabricates drug interactions, or a legal agent that cites non-existent case law, represents a trust failure of the highest order.
How Borealis uses it
Hallucinations manifest in the BM Score as constraint violations (if the agent is constrained to factual accuracy), anomalies, and audit completeness failures. High hallucination rates reduce scores across multiple dimensions simultaneously.
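To illustrate the multi-dimensional effect described above, here is a hedged sketch (the dimension names, weights, and update rule are hypothetical, not the actual BM Score implementation): a single hallucination event is recorded as a constraint violation, an anomaly, and an audit gap, so one event moves three dimensions at once.

```python
from dataclasses import dataclass

# Hypothetical score dimensions for illustration only; the real BM Score
# computation is not specified here.
@dataclass
class BMScore:
    constraint_adherence: float = 1.0  # 1.0 = no violations observed
    anomaly_rate: float = 0.0          # 0.0 = no anomalies observed
    audit_completeness: float = 1.0    # 1.0 = fully auditable trail

def record_hallucination(score: BMScore, penalty: float = 0.1) -> BMScore:
    # One hallucination propagates to all three dimensions simultaneously.
    return BMScore(
        constraint_adherence=max(0.0, score.constraint_adherence - penalty),
        anomaly_rate=min(1.0, score.anomaly_rate + penalty),
        audit_completeness=max(0.0, score.audit_completeness - penalty),
    )

after = record_hallucination(BMScore())
print(after)
```

The design point the sketch captures is the one the text makes: because a hallucination is simultaneously a constraint violation, an anomaly, and an audit failure, high hallucination rates compound rather than affecting a single metric.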