AI Trust Glossary  ·  Canonical Definition

Bias (AI Bias)

Systematic errors in AI output that result from prejudiced assumptions in training data or model design, causing the model to consistently favor or disfavor certain groups or outcomes.
Borealis Research Team  ·  Updated March 2026
Bias is directional, not random. A biased hiring model systematically disfavors candidates from certain demographics. Bias enters through training data (historical inequalities encoded as features), model architecture, or evaluation metrics that do not measure what matters.
Biased AI agents cause real harm, undermine public trust, and expose deploying organizations to legal liability under anti-discrimination laws and the EU AI Act. Detecting bias requires specific measurement techniques beyond standard accuracy metrics.
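As a concrete illustration of measurement beyond accuracy, the sketch below computes two widely used group-fairness metrics: the demographic parity difference and the disparate impact ratio. The data, group labels, and function names are hypothetical, and real bias audits typically use dedicated tooling and larger samples; this is a minimal sketch of the underlying arithmetic, not a certification procedure.

```python
# Illustrative sketch: two group-fairness metrics that standard
# accuracy reporting does not capture. All data here is hypothetical.

def selection_rates(preds, groups):
    """Positive-prediction rate per group (e.g., hire or approval rate)."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_diff(rates):
    """Absolute gap between the highest and lowest selection rates."""
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(rates):
    """Lowest rate divided by highest; the '80% rule' flags values below 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions (1 = positive outcome) and group labels.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)   # {"A": 0.8, "B": 0.4}
print(demographic_parity_diff(rates))    # 0.4 -> large selection-rate gap
print(disparate_impact_ratio(rates))     # 0.5 -> fails the 80% rule
```

Note that a model can score well on overall accuracy while failing both checks, which is why fairness metrics are computed per group rather than over the whole population.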
Bias evaluation is incorporated into the audit process for high-risk agent categories. Agents in hiring, lending, and healthcare require evidence of bias testing before certification. Bias findings affect constraint adherence and behavioral consistency scores.
Ready to put this into practice?
Certify your AI agent on BorealisMark and get a verifiable BM Score anchored to Hedera Hashgraph. Or run the BM Score Simulator to estimate your agent's score right now.