AI Trust Glossary · Canonical Definition
Robustness
An AI system's ability to maintain correct behavior across diverse, real-world input conditions, including natural variation, edge cases, and adversarial inputs.
Explanation
Robustness has two primary forms: general robustness (maintaining performance across natural input variation and legitimate edge cases) and adversarial robustness (maintaining correct behavior under deliberately manipulated inputs). Both matter for production deployment.
Why it matters
Laboratory performance does not guarantee production performance. Many AI systems perform well on benchmarks yet degrade in deployment as they encounter inputs outside their training distribution. Robustness testing before certification exposes these gaps before they affect real users.
How Borealis uses it
Robustness is evaluated through the constraint adherence and behavioral consistency dimensions. During audit, agents are tested against a range of inputs, including edge cases. Robustness failures surface as constraint violations or drops in behavioral consistency.
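As an illustration of the behavioral consistency idea above, the following is a minimal sketch, not Borealis's actual audit implementation: it assumes a hypothetical agent callable and measures the fraction of perturbed inputs for which the agent's output matches its output on the original prompt.

```python
# Hypothetical consistency check; the agent, perturbations, and scoring
# below are illustrative assumptions, not Borealis's audit internals.
from typing import Callable, Iterable


def consistency_score(agent: Callable[[str], str],
                      prompt: str,
                      perturbations: Iterable[Callable[[str], str]]) -> float:
    """Fraction of perturbed prompts whose output matches the baseline."""
    baseline = agent(prompt)
    variants = [perturb(prompt) for perturb in perturbations]
    if not variants:
        return 1.0
    matches = sum(agent(v) == baseline for v in variants)
    return matches / len(variants)


# Toy agent: routes a message to "refund" or "other".
def toy_agent(text: str) -> str:
    return "refund" if "refund" in text.lower() else "other"


perturbs = [
    str.upper,                                  # natural case variation
    lambda s: s + "  ",                         # trailing whitespace
    lambda s: s.replace("refund", "re-fund"),   # adversarial token split
]

score = consistency_score(toy_agent, "I want a refund", perturbs)
```

Here the toy agent survives the natural variations but fails the adversarial token split, so the score drops below 1.0; in an audit, such a drop would register on the behavioral consistency dimension.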