Bias in AI scales harm. A biased hiring algorithm rejects thousands of qualified candidates systematically. A biased loan model denies credit to entire demographic groups. A biased medical algorithm recommends unsafe treatments at population scale. At this scale, bias is not imperfect data; it is structural discrimination.
Bias enters through training data (historical inequalities encoded as features), model architecture (learned proxies for protected attributes), or evaluation metrics (objectives that optimize for the wrong outcome). A hiring model trained on historical hiring data inherits the biases of past decision-makers. Worse, most biased systems look accurate in aggregate, hiding systematic harm to minority groups.
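A minimal sketch of that last point, on entirely synthetic numbers: a model that looks 90% accurate overall can still be wrong more often than right for a minority group, which an aggregate metric never reveals.

```python
# Hypothetical evaluation results: (group, prediction_correct) pairs.
results = (
    [("majority", True)] * 855 + [("majority", False)] * 45
  + [("minority", True)] * 45 + [("minority", False)] * 55
)

def accuracy(pairs):
    """Fraction of correct predictions in an iterable of (group, correct) pairs."""
    pairs = list(pairs)
    return sum(ok for _, ok in pairs) / len(pairs)

overall = accuracy(results)
per_group = {g: accuracy(p for p in results if p[0] == g)
             for g in ("majority", "minority")}
# overall is 0.90, yet minority accuracy is only 0.45 — worse than a coin flip.
```

Any evaluation that reports only `overall` certifies this model as good while it systematically fails the minority group; disaggregated metrics are the minimum needed to see the harm.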
Biased AI agents expose deploying organizations to legal liability under fair lending laws (such as the Equal Credit Opportunity Act), the Fair Housing Act, and equal employment opportunity laws. The EU AI Act adds a further layer of regulatory scrutiny. Discovering bias and deploying anyway transforms negligence into intentional discrimination. Organizations that use certified agents with documented bias testing significantly reduce their legal risk.
Is all bias unintentional?
Most bias starts out unintentional. But discovering bias and deploying anyway makes it intentional. A loan model that denies credit to zip codes correlated with race is discriminatory regardless of the original intent.
How does historical training data cause bias?
A hiring model trained on historical hiring data inherits the biases of past decision-makers. If past hiring favored men, the model will learn to favor men. If past lending denied credit to certain zip codes, the model will deny credit to those zip codes. Historical bias becomes learned bias.
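A minimal sketch of that inheritance, using hypothetical counts: a frequency-based "model" fit on past hiring decisions simply reproduces the historical hire rates, so past favoritism becomes the learned score.

```python
from collections import Counter

# Hypothetical history in which past hiring favored men 2:1.
history = (
    [("men", True)] * 80 + [("men", False)] * 20
  + [("women", True)] * 40 + [("women", False)] * 60
)

hired, seen = Counter(), Counter()
for group, was_hired in history:
    seen[group] += 1
    hired[group] += int(was_hired)

# The "trained model": its score for a candidate is the historical
# hire rate for that candidate's group — the past bias, now learned.
learned_score = {g: hired[g] / seen[g] for g in seen}
# learned_score["men"] == 0.8, learned_score["women"] == 0.4
```

A real model is more complex than a lookup table, but the mechanism is the same: the optimizer rewards whatever reproduces the historical labels, including their bias.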
Can you hide discrimination with proxy variables?
No. Using zip code, address, school name, or other proxies for race or national origin is still illegal under fair lending and anti-discrimination laws. Courts recognize disparate impact: discrimination through facially neutral means. If your model systematically treats demographic groups differently, it is discriminatory regardless of which variables it uses.
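A minimal sketch of why dropping the protected attribute does not help (all data hypothetical): when zip code tracks group membership, a model keyed only on zip still reproduces the group disparity exactly.

```python
from collections import Counter

# Synthetic applicants: (zip_code, group, approved).
# Zip code perfectly tracks group membership in this toy data.
applicants = (
    [("11111", "A", True)] * 60 + [("11111", "A", False)] * 40
  + [("22222", "B", True)] * 30 + [("22222", "B", False)] * 70
)

# "Model" fit only on zip code — group never appears as a feature.
ok, total = Counter(), Counter()
for zip_code, _group, approved in applicants:
    total[zip_code] += 1
    ok[zip_code] += int(approved)
score_by_zip = {z: ok[z] / total[z] for z in total}

# Outcomes still split cleanly by group, because zip stands in for it:
# score_by_zip["11111"] (all group A) == 0.6
# score_by_zip["22222"] (all group B) == 0.3
```

Removing `group` from the feature set changed nothing about the outcomes, which is exactly the pattern disparate-impact analysis is designed to catch.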
How does Borealis detect bias?
Bias evaluation is built into the audit for high-risk agents (hiring, lending, healthcare). Borealis requires evidence of bias testing before certification, and bias findings lower constraint-adherence and behavioral-consistency scores. An agent cannot score high on trust while showing systematic demographic disparities.
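Borealis's internal scoring is not shown here; as an illustrative sketch only (the function name and threshold policy are assumptions, not the product's actual implementation), a pre-certification bias gate could apply the EEOC four-fifths rule to per-group selection rates:

```python
# Illustrative only — not Borealis's actual implementation.
FOUR_FIFTHS = 0.8  # EEOC adverse-impact threshold

def passes_bias_gate(selection_rates, reference_group):
    """selection_rates: approval rate per group, e.g. {"A": 0.6, "B": 0.3}.
    Fails if any group's rate falls below 4/5 of the reference group's."""
    ref = selection_rates[reference_group]
    return all(rate / ref >= FOUR_FIFTHS
               for rate in selection_rates.values())
```

Under this sketch, an agent with rates `{"A": 0.6, "B": 0.3}` fails (ratio 0.5), while `{"A": 0.6, "B": 0.55}` passes (ratio about 0.92), matching the principle above that systematic demographic disparities block a high trust score.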