AI Trust Glossary · Canonical Definition
Explainability
The degree to which an AI system's decisions can be presented to users in understandable terms - justifying specific outputs without necessarily exposing internal model workings.
Explanation
Explainability focuses on the output side: 'Why did it do this?' Interpretability focuses on internal mechanisms: 'How does it work?' A neural network can be explainable (via post-hoc feature attributions such as LIME or SHAP) without being interpretable (having inspectable weights). In regulated domains, explainability is usually the practically achievable requirement.
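The distinction above can be made concrete with a toy perturbation-based attribution: explain one prediction from a black-box scorer by measuring how its output changes when each feature is replaced with a baseline value. This is a minimal sketch in the spirit of LIME/SHAP, not either actual algorithm, and the model, feature names, and baseline are all illustrative assumptions.

```python
def score(features):
    # Stand-in "black box" model: a simple weighted sum.
    # We can attribute its output without inspecting these weights.
    weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def attribute(features, baseline):
    """Per-feature attribution: how much the output drops when that
    feature is ablated to its baseline value."""
    full = score(features)
    contributions = {}
    for name in features:
        perturbed = dict(features, **{name: baseline[name]})
        contributions[name] = full - score(perturbed)
    return contributions

applicant = {"income": 4.0, "debt": 2.0, "age": 3.0}
baseline = {"income": 0.0, "debt": 0.0, "age": 0.0}
print(attribute(applicant, baseline))
```

The point is that the explanation is produced purely from input-output behavior, so the same recipe applies whether the scorer is three weights or a deep network.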
Why it matters
The EU AI Act's transparency obligations and the GDPR's Article 22 protections (often summarized as a 'right to explanation') require that automated decisions affecting individuals can be explained. Explainability is also a practical debugging tool: unexplainable failures are the hardest to fix.
How Borealis uses it
The decision transparency dimension measures explainability at the decision level. hasReasoningChain and reasoningDepth in the telemetry schema capture whether the agent produced a traceable justification for each decision.
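A minimal sketch of what a per-decision telemetry record carrying those two fields could look like. Only hasReasoningChain and reasoningDepth come from the schema described above; the record type, decisionId, and reasoningSteps are hypothetical names added for illustration, not the actual Borealis schema.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTelemetry:          # hypothetical wrapper type
    decisionId: str               # hypothetical field
    hasReasoningChain: bool       # did the agent emit a traceable justification?
    reasoningDepth: int           # number of steps in that justification
    reasoningSteps: list = field(default_factory=list)  # hypothetical field

record = DecisionTelemetry(
    decisionId="dec-001",
    hasReasoningChain=True,
    reasoningDepth=2,
    reasoningSteps=["retrieved policy document", "matched eligibility clause"],
)

# A decision is traceable when a reasoning chain exists and its depth
# matches the recorded steps.
assert record.hasReasoningChain
assert record.reasoningDepth == len(record.reasoningSteps)
```

Keeping the justification alongside the decision record, rather than reconstructing it later, is what makes the explainability measurable per decision.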