AI Trust Glossary  ·  Canonical Definition

Hallucination

When an AI system generates plausible-sounding but factually incorrect or entirely fabricated content, presented with the same confidence as accurate output.
Borealis Research Team  ·  Updated March 2026  ·  View all 47 terms
Hallucinations occur because language models predict likely next tokens, not true statements. The model has no mechanism for detecting when it is confabulating rather than accurately recalling; a hallucination is simply a confident output that happens to be false.
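The point about likelihood versus truth can be illustrated with a toy next-token predictor. The probability table below is entirely made up for illustration; it is not a real model, but it shows how greedy decoding asserts the most *frequent* continuation with high confidence even when that continuation is factually wrong.

```python
# Toy illustration (hypothetical, not a real LM): a next-token predictor
# has no notion of truth, only of likelihood. The probabilities below are
# invented to mimic how a common misconception can dominate training text.
next_token_probs = {
    ("capital", "of", "australia", "is"): {
        "sydney": 0.62,    # frequent in text, but factually wrong
        "canberra": 0.31,  # correct, yet less common in casual writing
        "melbourne": 0.07,
    },
}

def greedy_complete(context):
    """Return the highest-probability next token and its probability."""
    probs = next_token_probs[tuple(context)]
    token = max(probs, key=probs.get)
    return token, probs[token]

token, confidence = greedy_complete(["capital", "of", "australia", "is"])
print(token, confidence)  # the model confidently emits the wrong answer
```

Nothing in the decoding step distinguishes the false completion from the true one; confidence here measures frequency, not accuracy.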
In high-stakes domains (legal, medical, financial), hallucinations can cause direct harm. A medical AI agent that confidently fabricates drug interactions, or a legal agent that cites non-existent case law, represents a trust failure of the highest order.
Hallucinations manifest in the BM Score as constraint violations (if the agent is constrained to factual accuracy), anomalies, and audit completeness failures. High hallucination rates reduce scores across multiple dimensions simultaneously.
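One way a factual-accuracy constraint can be operationalized is by checking an agent's citations against a verified store and counting each miss as a violation. The sketch below is illustrative only; the fact store, function names, and counting rule are assumptions, not the actual BM Score implementation.

```python
# Hypothetical sketch: treating fabricated citations as constraint
# violations, in the spirit of the BM Score description above. The
# verified store and the one-violation-per-miss rule are assumptions.
KNOWN_CASES = {"marbury v. madison", "brown v. board of education"}

def count_citation_violations(cited_cases):
    """Each citation absent from the verified store counts as one violation."""
    return sum(1 for case in cited_cases if case.lower() not in KNOWN_CASES)

violations = count_citation_violations(
    ["Marbury v. Madison", "Smith v. Fabricated Corp."]
)
print(violations)  # the invented case is flagged as a violation
```

A per-citation check like this is what lets a single hallucination-prone agent fail on several dimensions at once: the same fabricated citation can register as a constraint violation, an anomaly, and a gap in the audit trail.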
Ready to put this into practice?
Certify your AI agent on BorealisMark and get a verifiable BM Score anchored to Hedera Hashgraph. Or run the BM Score Simulator to estimate your agent's score right now.