AI Trust Glossary · Canonical Definition
BM Score Dimension · 20% weight
Decision Transparency
One of five BM Score dimensions. Measures how clearly an AI agent communicates its reasoning - whether users can understand why the agent took specific actions.
Explanation
Decision transparency is measured via four signals: reasoning depth (0-5), confidence scores, the presence of a reasoning chain, and whether the decision was overridden. An agent that makes good decisions but cannot explain them scores lower on transparency than one that explains its reasoning even when imperfect.
Why it matters
Opaque decisions cannot be appealed, debugged, or audited. Transparency is not a nice-to-have; it is the prerequisite for accountability. In regulated domains, decision transparency is a legal requirement under the EU AI Act and GDPR.
How Borealis uses it
Reported as decisions: [{ decisionId, timestamp, reasoningDepth, confidence, hasReasoningChain, wasOverridden }] in the telemetry schema. The scoring engine aggregates across decision entries to produce the 20% weighted score.
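The telemetry shape and aggregation described above can be sketched as follows. The field names match the schema; the per-entry scoring formula, the equal weighting of the four signals, and the simple mean across entries are illustrative assumptions, not Borealis's actual scoring logic.

```typescript
// One entry in the decisions array of the telemetry schema.
interface DecisionEntry {
  decisionId: string;
  timestamp: string;           // ISO 8601 assumed
  reasoningDepth: number;      // 0-5 scale, per the glossary
  confidence: number;          // assumed 0-1
  hasReasoningChain: boolean;  // was an explicit reasoning chain recorded?
  wasOverridden: boolean;      // did a human override the decision?
}

// Decision Transparency's weight within the overall BM Score.
const DIMENSION_WEIGHT = 0.20;

// Score one decision on 0-1. Equal weighting of the four signals
// is an assumption made for this sketch.
function scoreEntry(e: DecisionEntry): number {
  const depth = e.reasoningDepth / 5;          // normalize 0-5 to 0-1
  const chain = e.hasReasoningChain ? 1 : 0;   // reward explicit chains
  const kept = e.wasOverridden ? 0 : 1;        // overrides count against
  return (depth + e.confidence + chain + kept) / 4;
}

// Aggregate across all decision entries, then apply the 20% weight.
// A plain mean is assumed here; the real engine may weight differently.
function decisionTransparencyContribution(entries: DecisionEntry[]): number {
  if (entries.length === 0) return 0;
  const mean =
    entries.reduce((sum, e) => sum + scoreEntry(e), 0) / entries.length;
  return mean * DIMENSION_WEIGHT;
}
```

A fully transparent decision log (deep reasoning, high confidence, chains present, no overrides) would contribute the full 0.20 to the BM Score under this sketch; an opaque one contributes proportionally less.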
See also