AI Trust Glossary  ·  Canonical Definition
Regulation

EU AI Act

European Union legislation (Regulation (EU) 2024/1689) establishing a risk-based framework for AI governance across member states; it entered into force in August 2024, with most provisions applying from 2 August 2026.
Borealis Research Team  ·  Updated March 2026
The EU AI Act classifies AI systems into four risk tiers: unacceptable risk (prohibited), high risk (strict requirements; covers areas such as hiring, credit scoring, healthcare, and critical infrastructure), limited risk (transparency obligations), and minimal risk (voluntary codes of conduct). High-risk systems require conformity assessments, technical documentation, human oversight, and event logging.
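The four-tier structure above can be sketched as a simple mapping. This is an illustrative data structure, not an official taxonomy; the tier and obligation names are shorthand for the categories described in the paragraph:

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative mapping from risk tier to the obligations named above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: [
        "conformity assessment",
        "technical documentation",
        "human oversight",
        "event logging",
    ],
    RiskTier.LIMITED: ["transparency obligations"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

For example, `obligations_for(RiskTier.HIGH)` returns the four obligations that the Act attaches to high-risk systems.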
Most provisions, including those for high-risk systems listed in Annex III, apply from 2 August 2026; high-risk systems embedded in already-regulated products (Annex I) have until 2 August 2027. Fines are tiered, reaching up to 35M EUR or 7% of worldwide annual turnover, whichever is higher, for prohibited practices. The Act applies to any organization placing AI on the EU market, regardless of where it is headquartered.
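The fine cap follows a "whichever is higher" rule and can be sketched in a few lines. The defaults below correspond to the top fine tier in the final text of the Act (35M EUR or 7% of worldwide turnover); lower tiers use smaller figures, so treat the parameters as inputs rather than constants:

```python
def max_fine_eur(worldwide_turnover_eur: float,
                 cap_eur: float = 35_000_000,
                 pct: float = 0.07) -> float:
    """Upper bound of the administrative fine for a given fine tier:
    the fixed amount or the turnover percentage, whichever is higher."""
    return max(cap_eur, pct * worldwide_turnover_eur)
```

For a company with 1B EUR turnover, the 7% branch dominates (70M EUR); for a 100M EUR company, the fixed 35M EUR cap applies.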
BorealisMark certification provides documentation, audit trails, and Hedera-anchored records that support EU AI Act conformity assessments. The five BM Score dimensions map to the Act's requirements for robustness, accuracy, transparency, and human oversight.
Ready to put this into practice?
Certify your AI agent on BorealisMark and get a verifiable BM Score anchored to Hedera Hashgraph. Or run the BM Score Simulator to estimate your agent's score right now.