A
Adversarial Robustness
An AI system's ability to maintain correct behavior when facing deliberately manipulated inputs designed to cause failure.
Full definition →
Agent ID
The unique identifier assigned to an AI agent upon BorealisMark registration, serving as the permanent reference for all certification records.
Full definition →
AI Alignment
The challenge of ensuring AI systems act in accordance with human values and intentions - not just their literal instructions.
Full definition →
AI Governance
Organizational frameworks, policies, and processes for ensuring AI is developed and deployed responsibly, fairly, and accountably.
Full definition →
Core Borealis Concept
AI Trust Score
A quantified rating of how trustworthy an AI agent is across five behavioral dimensions. Not a capability benchmark - a behavioral reliability rating.
Full definition →
Algorithmic Accountability
The principle that organizations deploying AI must be answerable for algorithmic decisions - including clear attribution of responsibility and mechanisms for redress.
Full definition →
Content Strategy
Answer Engine Optimization (AEO)
The practice of structuring content so AI systems - ChatGPT, Claude, Perplexity - can retrieve, understand, and cite it. The successor to traditional SEO.
Full definition →
BTS Dimension - 15%
Anomaly Rate
One of five BTS dimensions. Measures the frequency of unexpected or deviant behaviors relative to an agent's established baseline performance.
Full definition →
BTS Dimension - 10%
Audit Completeness
One of five BTS dimensions. Measures whether all expected log entries are present and whether the agent's execution is fully observable.
Full definition →
B
Borealis Product
Borealis Sonar
An AEO readiness scanner scoring any website across 5 dimensions - schema, structure, crawlability, copy, and cross-linking. Free to scan. Claim a BTS key for your domain.
Full definition →
BTS Dimension - 20%
Behavioral Consistency
One of five BTS dimensions. Measures how predictably an AI agent produces outputs across similar inputs - capturing reliability over time.
Full definition →
Bias (AI Bias)
Systematic errors in AI output that result from prejudiced assumptions in training data or model design - causing the model to consistently favor or disfavor certain groups.
Full definition →
Core Borealis Concept
BTS (Borealis Trust Score)
A 0-1000 rating (displayed as 0-100) measuring AI agent trustworthiness across five weighted behavioral dimensions, anchored to Hedera Hashgraph as immutable proof.
Full definition →
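As a rough sketch, the weighting described across this glossary's five BTS dimensions (Constraint Adherence 35%, Behavioral Consistency 20%, Decision Transparency 20%, Anomaly Rate 15%, Audit Completeness 10%) can be expressed as a simple weighted sum. The actual BorealisMark scoring pipeline is not documented here; the function names, the assumption that each dimension is itself scored 0-1000, and the convention that a higher Anomaly Rate sub-score means fewer anomalies are all illustrative.

```python
# Illustrative sketch only: combines five per-dimension scores (each
# assumed to be on a 0-1000 scale) using the weights stated in this
# glossary. Not the actual BorealisMark implementation.

BTS_WEIGHTS = {
    "constraint_adherence": 0.35,
    "behavioral_consistency": 0.20,
    "decision_transparency": 0.20,
    "anomaly_rate": 0.15,        # assumed: higher score = fewer anomalies
    "audit_completeness": 0.10,
}

def bts_score(dimensions: dict) -> int:
    """Weighted sum of per-dimension scores -> overall 0-1000 BTS."""
    return round(sum(dimensions[name] * weight
                     for name, weight in BTS_WEIGHTS.items()))

def display_score(score: int) -> int:
    """The 0-1000 score is displayed on a 0-100 scale."""
    return round(score / 10)
```

Note that the five weights sum to 1.0, so a perfect score on every dimension yields the maximum BTS of 1000 (displayed as 100).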
Core Borealis Concept
BTS License Key
A unique cryptographic identifier (BTS-XXXX-XXXX-XXXX-XXXX) that permanently binds one AI agent to the Borealis Trust Network. One key, one agent, forever.
Full definition →
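A minimal sketch of validating the BTS-XXXX-XXXX-XXXX-XXXX shape described above. The exact character set of each four-character group is an assumption (uppercase letters and digits); the real key alphabet may be narrower.

```python
import re

# Assumption: each XXXX group is four uppercase alphanumerics.
# The actual BorealisMark key alphabet is not specified in this glossary.
BTS_KEY_RE = re.compile(r"^BTS-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}$")

def is_valid_bts_key(key: str) -> bool:
    """Check that a string matches the BTS-XXXX-XXXX-XXXX-XXXX shape."""
    return BTS_KEY_RE.fullmatch(key) is not None
```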
C
Core Borealis Concept
Certification
The process of evaluating an AI agent against the Borealis trust framework, assigning a BTS and credit rating, and permanently anchoring the result on Hedera Hashgraph.
Full definition →
BTS Dimension - 35%
Constraint Adherence
The most heavily weighted BTS dimension. Measures how reliably an AI agent operates within its defined rules and guardrails - even under adversarial conditions.
Full definition →
BorealisMark Scoring
BTS Credit Rating
A letter grade from AAA+ (score 980+) to FLAGGED (below 500) derived from an AI agent's BTS trust score. One glance answers whether an agent should be trusted.
Full definition →
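Only the two endpoint thresholds are stated here (AAA+ at 980 and above, FLAGGED below 500); the intermediate letter bands are not specified in this glossary, so this sketch returns a placeholder for them rather than inventing cutoffs.

```python
def credit_rating(score: int) -> str:
    """Map a 0-1000 BTS to a credit rating.

    Only the endpoints are documented in this glossary; the
    intermediate bands are a deliberate placeholder, not real tiers.
    """
    if score >= 980:
        return "AAA+"
    if score < 500:
        return "FLAGGED"
    return "INTERMEDIATE"  # actual band boundaries not specified here
```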
Continuous Monitoring
Ongoing evaluation of AI agent behavior after deployment - enabling detection of drift, failure modes, and degradation before they cause harm.
Full definition →
D
Infrastructure
Decentralized Identifier (DID)
A W3C standard for globally unique, cryptographically verifiable identifiers requiring no central authority. The foundation for AI agent identity in Borealis Protocol.
Full definition →
Data Provenance
The documented history of data used to train or operate an AI system - including source, ownership, transformation chain, and custody history.
Full definition →
BTS Dimension - 20%
Decision Transparency
One of five BTS dimensions. Measures how clearly an AI agent communicates its reasoning - whether users can understand why the agent took specific actions.
Full definition →
Core Borealis Concept
did:bts Method
Borealis Protocol's W3C DID method for AI agents. Format: did:bts:XXXX-XXXX-XXXX-XXXX. Every BTS License Key is a proto-DID evolving toward full W3C compliance.
Full definition →
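Since the entry above gives the DID format as did:bts:XXXX-XXXX-XXXX-XXXX and states that every BTS License Key is a proto-DID, one plausible derivation is to drop the key's BTS- prefix. That derivation rule is an assumption for illustration; the protocol may define it differently.

```python
def did_from_key(bts_key: str) -> str:
    """Derive a did:bts identifier from a BTS License Key.

    Assumption: the DID body is the key with its 'BTS-' prefix removed,
    matching the did:bts:XXXX-XXXX-XXXX-XXXX format shown above.
    """
    prefix = "BTS-"
    if not bts_key.startswith(prefix):
        raise ValueError("expected a key of the form BTS-XXXX-XXXX-XXXX-XXXX")
    return "did:bts:" + bts_key[len(prefix):]
```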
Drift (Model Drift)
Gradual degradation of AI model performance over time as real-world data distributions shift away from those seen during training.
Full definition →
E
Regulation
EU AI Act
European Union legislation establishing a risk-based framework for AI governance across member states, with enforcement beginning August 2026.
Full definition →
Explainability
The degree to which an AI system's decisions can be presented to users in understandable terms - justifying specific outputs without necessarily exposing internal workings.
Full definition →
H
Hallucination
When an AI system generates plausible-sounding but factually incorrect or entirely fabricated content - presented with the same confidence as accurate output.
Full definition →
Infrastructure
Hedera Consensus Service (HCS)
The Hedera Hashgraph service used to anchor BorealisMark certification records, audit trails, and trust scores on an immutable public ledger.
Full definition →
Human-in-the-Loop (HITL)
System design where human oversight is required for certain AI decisions - balancing automation benefits with direct human accountability for high-stakes outcomes.
Full definition →
M
Model Card
Standardized documentation for AI models describing performance characteristics, limitations, intended use cases, and ethical considerations.
Full definition →
Model Drift
See Drift. The gradual degradation of model performance over time as the input distribution or the input-to-output relationship (concept drift) changes.
See Drift →
R
Red Teaming
Deliberate adversarial testing of AI systems - having a dedicated team attempt to find vulnerabilities, elicit harmful outputs, and expose failure modes before deployment.
Full definition →
Responsible AI
The umbrella practice of developing and deploying AI systems that are lawful, ethical, and robust - with governance, accountability, and ongoing monitoring across the full system lifecycle.
Full definition →
Robustness
An AI system's ability to maintain correct behavior across diverse, real-world input conditions - including natural variation, edge cases, and adversarial inputs.
Full definition →
S
Safety (AI Safety)
The degree to which an AI system avoids causing harm - physically, financially, psychologically, or reputationally - to users, third parties, or itself.
Full definition →
Sandboxing
Running an AI agent in an isolated execution environment with restricted permissions - limiting what actions it can take and what data it can access.
Full definition →
Regulation
Software as a Medical Device (SaMD)
Regulatory classification for software that diagnoses, treats, or monitors medical conditions - subject to FDA approval, quality standards, and post-market surveillance.
Full definition →
T
Core Borealis Concept
Agent Telemetry
Structured behavioral data an AI agent submits for BTS score computation. The v3.2 schema captures all five scoring dimensions plus hashed evidence for tamper detection.
Full definition →
Transparency
The quality of being open and accessible for scrutiny - providing visibility into how an AI system operates, what decisions it makes, and why.
Full definition →
Core Borealis Concept
Trust Badge
A visual credential certifying that an AI agent has been evaluated and certified by BorealisMark - displayed on agent profiles and third-party platforms.
Full definition →
Core Borealis Concept
Trust Gate
A marketplace filter that only allows AI agents above a certain certification tier to be listed, purchased, or deployed - enforcing minimum trustworthiness standards.
Full definition →
Trustworthy AI
AI systems that reliably behave within defined boundaries, communicate their reasoning clearly, demonstrate consistent decision-making, and operate with observable accountability.
Full definition →
V
Infrastructure
Verifiable Credential
A W3C standard for tamper-evident digital credentials signed by an issuer. BTS trust scores are evolving into VCs - independently verifiable proof of an AI agent's trustworthy behavior.
Full definition →
Core Borealis Concept
Verification
The public process of confirming an AI agent's BTS, credit rating, and certification status by querying the BorealisMark API or Hedera Hashgraph ledger.
Full definition →
Looking for the full enriched glossary? The complete version with 4-part explanations, cross-references, and Borealis scoring context is also available in the Hub.
Full Enriched Glossary →