AI Trust Glossary · Canonical Definition
Human-in-the-Loop (HITL)
A system design in which human oversight is required for certain AI decisions or actions, balancing the benefits of automation against direct human accountability for high-stakes outcomes.
Explanation
HITL is not all-or-nothing. A well-designed system routes low-risk decisions to automated processing, medium-risk decisions to human review with AI recommendation, and high-risk decisions to human decision-making with AI analysis. The routing logic is itself a governance decision.
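The three-tier routing described above can be sketched in a few lines. This is an illustrative example, not a Borealis implementation: the `Route` names and the numeric thresholds are assumptions, and as the text notes, choosing the actual cut-offs is itself a governance decision.

```python
from enum import Enum

class Route(Enum):
    AUTOMATED = "automated processing"
    HUMAN_REVIEW = "human review with AI recommendation"
    HUMAN_DECISION = "human decision-making with AI analysis"

def route_decision(risk_score: float) -> Route:
    """Map a risk score in [0, 1] to an oversight tier.

    Thresholds (0.3, 0.7) are placeholders; in practice they are
    set and reviewed as a governance decision, not hard-coded.
    """
    if risk_score < 0.3:
        return Route.AUTOMATED      # low risk: fully automated
    if risk_score < 0.7:
        return Route.HUMAN_REVIEW   # medium risk: human reviews AI output
    return Route.HUMAN_DECISION     # high risk: human decides, AI assists
```

The point of making the routing explicit in one function is that it can be audited and version-controlled like any other policy artifact.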
Why it matters
The EU AI Act (Article 14) mandates human oversight for high-risk AI systems, and HITL is the primary mechanism for meeting that requirement. Human reviewers also catch failure modes that automated systems are blind to, and routing the final decision through a named person gives clear liability attribution.
How Borealis uses it
The MAGISTRATE role in the Borealis audit pipeline is a HITL mechanism. ARBITER agents submit audit evidence; a human MAGISTRATE issues the certification verdict. This prevents fully automated self-certification and ensures human accountability for the final trust determination.
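The separation of duties described above can be sketched as follows. The role names ARBITER and MAGISTRATE come from the text, but the data structures and function signatures are hypothetical, a minimal sketch of the invariant that agents may only submit evidence while certification requires a human verdict.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class AuditCase:
    subject: str
    evidence: List[Tuple[str, str]] = field(default_factory=list)
    verdict: Optional[str] = None
    certified_by: Optional[str] = None  # always a human MAGISTRATE id

def submit_evidence(case: AuditCase, arbiter_id: str, finding: str) -> None:
    """ARBITER agents may only append evidence; they cannot certify."""
    case.evidence.append((arbiter_id, finding))

def certify(case: AuditCase, magistrate_id: str, verdict: str) -> None:
    """Only a human MAGISTRATE issues the verdict, and only on evidence.

    Keeping certification in a separate, human-invoked path prevents
    fully automated self-certification.
    """
    if not case.evidence:
        raise ValueError("cannot certify a case with no audit evidence")
    case.verdict = verdict
    case.certified_by = magistrate_id
```

Enforcing the gate in code is only half the mechanism; in a real deployment the `certify` path would also be access-controlled so agent credentials cannot invoke it.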
See also