A
Adversarial Robustness
An AI system's ability to maintain correct behavior when facing deliberately manipulated inputs designed to cause failure.
Full definition →
Agent ID
The unique identifier assigned to an AI agent upon BorealisMark registration, serving as the permanent reference for all certification records.
Full definition →
AI Alignment
The challenge of ensuring AI systems act in accordance with human values and intentions - not just their literal instructions.
Full definition →
AI Governance
Organizational frameworks, policies, and processes for ensuring AI is developed and deployed responsibly, fairly, and accountably.
Full definition →
Core Borealis Concept
AI Trust Score
A quantified rating of how trustworthy an AI agent is across five behavioral dimensions. Not a capability benchmark - a behavioral reliability rating.
Full definition →
Algorithmic Accountability
The principle that organizations deploying AI must be answerable for algorithmic decisions - including clear attribution of responsibility and mechanisms for redress.
Full definition →
BM Score Dimension - 15%
Anomaly Rate
One of five BM Score dimensions. Measures the frequency of unexpected or deviant behaviors relative to an agent's established baseline performance.
Full definition →
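The definition above can be sketched as a simple measurement: the share of observed behaviors flagged as deviating from the agent's baseline. This is a minimal illustration, not Borealis's method; the flagging criterion (a tolerance band around a scalar baseline) is an assumption chosen for clarity.

```python
# Hypothetical sketch of an anomaly-rate measurement. The tolerance-band
# criterion is an illustrative assumption; real deviation detection would
# be richer than comparing scalars.

def anomaly_rate(observations: list[float], baseline: float, tolerance: float = 0.1) -> float:
    """Fraction of observations falling outside baseline +/- tolerance."""
    deviant = sum(1 for x in observations if abs(x - baseline) > tolerance)
    return deviant / len(observations)

# Two of five observations deviate from the baseline of 1.0
print(anomaly_rate([1.0, 1.05, 1.3, 0.6, 1.02], baseline=1.0))  # 0.4
```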
BM Score Dimension - 10%
Audit Completeness
One of five BM Score dimensions. Measures whether all expected log entries are present and whether the agent's execution is fully observable.
Full definition →
B
BM Score Dimension - 20%
Behavioral Consistency
One of five BM Score dimensions. Measures how predictably an AI agent produces outputs across similar inputs - capturing reliability over time.
Full definition →
Bias (AI Bias)
Systematic errors in AI output that result from prejudiced assumptions in training data or model design - causing the model to consistently favor or disfavor certain groups.
Full definition →
Core Borealis Concept
BM Score (Borealis Trust Score)
A 0-1000 rating (displayed as 0-100) measuring AI agent trustworthiness across five weighted behavioral dimensions, anchored to Hedera Hashgraph as immutable proof.
Full definition →
Core Borealis Concept
BTS License Key
A unique cryptographic identifier (BTS-XXXX-XXXX-XXXX-XXXX) that permanently binds one AI agent to the Borealis Trust Network. One key, one agent, forever.
Full definition →
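The key shape given above (BTS-XXXX-XXXX-XXXX-XXXX) lends itself to a simple format check. Only the overall shape comes from the glossary; the character set of each XXXX group (uppercase alphanumerics here) is an assumption for illustration.

```python
import re

# Hypothetical format check for the BTS-XXXX-XXXX-XXXX-XXXX key shape.
# The [A-Z0-9] character class is an assumption; the glossary specifies
# only the overall BTS- prefix and four 4-character groups.
BTS_KEY_RE = re.compile(r"^BTS-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}$")

def is_valid_bts_key(key: str) -> bool:
    """Return True if the string matches the BTS key shape."""
    return BTS_KEY_RE.fullmatch(key) is not None

print(is_valid_bts_key("BTS-A1B2-C3D4-E5F6-G7H8"))  # True
print(is_valid_bts_key("BTS-A1B2-C3D4-E5F6"))       # False: only three groups
```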
C
Core Borealis Concept
Certification
The process of evaluating an AI agent against the Borealis trust framework, assigning a BM Score and credit rating, and permanently anchoring the result on Hedera Hashgraph.
Full definition →
BM Score Dimension - 35%
Constraint Adherence
The most heavily weighted BM Score dimension. Measures how reliably an AI agent operates within its defined rules and guardrails - even under adversarial conditions.
Full definition →
Continuous Monitoring
Ongoing evaluation of AI agent behavior after deployment - enabling detection of drift, failure modes, and degradation before they cause harm.
Full definition →
D
Data Provenance
The documented history of data used to train or operate an AI system - including source, ownership, transformation chain, and custody history.
Full definition →
BM Score Dimension - 20%
Decision Transparency
One of five BM Score dimensions. Measures how clearly an AI agent communicates its reasoning - whether users can understand why the agent took specific actions.
Full definition →
Drift (Model Drift)
Gradual degradation of AI model performance over time as real-world data distributions shift away from those seen during training.
Full definition →
E
Regulation
EU AI Act
European Union legislation establishing a risk-based framework for AI governance across member states, with enforcement beginning August 2026.
Full definition →
Explainability
The degree to which an AI system's decisions can be presented to users in understandable terms - justifying specific outputs without necessarily exposing internal workings.
Full definition →
H
Hallucination
When an AI system generates plausible-sounding but factually incorrect or entirely fabricated content - presented with the same confidence as accurate output.
Full definition →
Infrastructure
Hedera Consensus Service (HCS)
The Hedera Hashgraph service used to anchor BorealisMark certification records, audit trails, and trust scores on an immutable public ledger.
Full definition →
Human-in-the-Loop (HITL)
System design where human oversight is required for certain AI decisions - balancing automation benefits with direct human accountability for high-stakes outcomes.
Full definition →
M
Model Card
Standardized documentation for AI models describing performance characteristics, limitations, intended use cases, and ethical considerations.
Full definition →
Model Drift
See Drift. The gradual degradation of model performance over time as the input distribution shifts or the relationship between inputs and outputs (concept drift) changes.
See Drift →
R
Red Teaming
Deliberate adversarial testing of AI systems - having a dedicated team attempt to find vulnerabilities, elicit harmful outputs, and expose failure modes before deployment.
Full definition →
Responsible AI
The umbrella practice of developing and deploying AI systems that are lawful, ethical, and robust - with governance, accountability, and ongoing monitoring across the full system lifecycle.
Full definition →
Robustness
An AI system's ability to maintain correct behavior across diverse, real-world input conditions - including natural variation, edge cases, and adversarial inputs.
Full definition →
S
Safety (AI Safety)
The degree to which an AI system avoids causing harm - physically, financially, psychologically, or reputationally - to users, third parties, or itself.
Full definition →
Sandboxing
Running an AI agent in an isolated execution environment with restricted permissions - limiting what actions it can take and what data it can access.
Full definition →
Regulation
Software as a Medical Device (SaMD)
Regulatory classification for software that diagnoses, treats, or monitors medical conditions - subject to regulatory approval (e.g., by the FDA in the US), quality standards, and post-market surveillance.
Full definition →
T
Transparency
The quality of being open and accessible for scrutiny - providing visibility into how an AI system operates, what decisions it makes, and why.
Full definition →
Core Borealis Concept
Trust Badge
A visual credential certifying that an AI agent has been evaluated and certified by BorealisMark - displayed on agent profiles and third-party platforms.
Full definition →
Core Borealis Concept
Trust Gate
A marketplace filter that only allows AI agents above a certain certification tier to be listed, purchased, or deployed - enforcing minimum trustworthiness standards.
Full definition →
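A Trust Gate as defined above is, mechanically, a threshold filter over a catalog of certified agents. The sketch below assumes a numeric BM Score cutoff; the field names and the 700 threshold are illustrative, not Borealis values.

```python
# Hypothetical Trust Gate: only agents whose BM Score clears a minimum
# threshold are listed. The 700 cutoff and dict fields are assumptions.

def trust_gate(agents: list[dict], min_score: int = 700) -> list[dict]:
    """Return only the agents whose BM Score meets or exceeds the gate."""
    return [a for a in agents if a["bm_score"] >= min_score]

catalog = [
    {"agent_id": "agent-001", "bm_score": 812},
    {"agent_id": "agent-002", "bm_score": 645},
    {"agent_id": "agent-003", "bm_score": 733},
]
print([a["agent_id"] for a in trust_gate(catalog)])  # ['agent-001', 'agent-003']
```

In practice a gate could key off the certification tier rather than the raw score, but the enforcement logic is the same: below the bar, the agent is not listed.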
Trustworthy AI
AI systems that reliably behave within defined boundaries, communicate their reasoning clearly, demonstrate consistent decision-making, and operate with observable accountability.
Full definition →
Looking for the full enriched glossary? The complete version with 4-part explanations, cross-references, and Borealis scoring context is also available in the Hub.
Full Enriched Glossary →