Resources

AI Governance Intelligence

Regulatory timelines, insurance market signals, and the context you need to make informed decisions about AI governance certification.

EU AI Act — Key Dates

The EU AI Act is the world's first comprehensive AI regulation. The European Parliament and Council have aligned on the Digital Omnibus extension, pushing high-risk AI system obligations to fixed future dates. Trilogue negotiations are expected to conclude in May 2026, after which the extended timeline becomes law.

Date | Milestone | Status
Aug 1, 2024 | EU AI Act entered into force | Passed
Feb 2, 2025 | Prohibited AI practices ban effective | Passed
Aug 2, 2025 | GPAI model obligations effective; governance provisions apply | Passed
Nov 2, 2026 | Watermarking / content-origin rules | Proposed
Dec 2, 2027 | High-risk AI systems obligations, standalone systems (Digital Omnibus) | Agreed
Aug 2, 2028 | High-risk AI systems embedded in regulated products (Digital Omnibus) | Agreed

Where the Delay Stands Right Now

The Council of the EU adopted its position on the Digital Omnibus on March 13, 2026. The European Parliament adopted its position on March 26, 2026. Both co-legislators have aligned on the fixed extended deadlines — December 2, 2027 and August 2, 2028. Trilogue negotiations with the Commission are expected to conclude in May 2026, at which point the amended text will be formally enacted. Until formal adoption, the original August 2, 2026 deadline remains legally binding. Penalties remain unchanged at up to €35 million or 7% of global annual turnover. Organizations that certify now have time to remediate before enforcement — and will be positioned ahead of competitors when the final text is adopted.
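For scale, note that the penalty ceiling is the *higher* of the two figures, not the lower, so for large organizations the percentage governs. A minimal sketch (the function name is illustrative, not part of any official tooling):

```python
def eu_ai_act_max_fine(global_turnover_eur: float) -> float:
    """Upper bound of an EU AI Act penalty for the most serious
    infringements: EUR 35M or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 2B in global turnover: 7% is EUR 140M,
# so the percentage-based cap applies, not the EUR 35M floor.
print(f"{eu_ai_act_max_fine(2_000_000_000):,.0f}")
```

For any organization with global turnover above €500M, the 7% figure exceeds the €35M floor and sets the exposure.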

The Insurance Signal

Major insurers are actively excluding AI from liability coverage. This isn't a future risk — it's happening now. Independent certification is the governance signal insurers need to move from blanket exclusion to risk-tiered pricing.

Berkley

Introduced an "absolute" AI exclusion from professional liability and D&O coverage — no exceptions, no carve-outs.

Hamilton

Excluded generative AI from professional liability coverage, specifically targeting AI-generated outputs and decisions.

Verisk / ISO

Released standardized AI exclusionary forms in January 2026, providing template language for the entire insurance industry to exclude AI risk.

Lloyd's Market

Carriers following suit with AI-specific exclusions across multiple lines including D&O, E&O, and fiduciary liability.

Swiss Re

Warned about "silent AI" risk — existing policies inadvertently covering AI losses without proper assessment or pricing.

The Bottom Line

Without independent AI governance certification, organizations face uncovered exposure across Directors & Officers, Errors & Omissions, and Fiduciary Liability policies.

Regulatory Frameworks We Certify Against

EU

EU AI Act

The world's first comprehensive AI regulation. Classifies AI systems by risk level and imposes requirements for high-risk systems including risk management, data governance, transparency, human oversight, and conformity assessment. Penalties up to €35M or 7% of global annual turnover, whichever is higher.

US

NIST AI Risk Management Framework

Voluntary U.S. federal framework for managing AI risk across the lifecycle. Four core functions: Govern, Map, Measure, Manage. Increasingly referenced in U.S. government procurement and state-level AI legislation.

International

ISO/IEC 42001

International standard for AI management systems. Establishes organizational-level requirements for responsible development, provision, and use of AI. The first ISO standard specifically for AI governance.

EU

GDPR

General Data Protection Regulation. Governs the processing of personal data across the EU. AI systems that process personal data must comply with GDPR requirements for data minimization, purpose limitation, and individual rights. Penalties up to €20M or 4% of global turnover, whichever is higher.

Global

UNESCO Recommendation on the Ethics of AI

Global ethical framework adopted by 193 member states. Establishes principles for fairness, transparency, accountability, privacy, safety, and human oversight in AI development and deployment.

Extended Coverage

Additional Frameworks

The Clause 5 Framework also addresses requirements from ISO/IEC 23894 (AI Risk Management), ISO/IEC 22989 (AI Concepts and Terminology), the IEEE 7000 series, the OECD AI Principles, and emerging U.S. state-level AI regulations including those in Colorado, Illinois, and Connecticut.

Common Questions

Do we need to be certified before the EU AI Act enforcement date?
You don't need to be certified before enforcement begins, but you need to be compliant. Certification provides independent, verifiable proof of compliance. Organizations that certify early have time to identify and remediate gaps before enforcement creates urgency, regulatory scrutiny, and higher costs.
What happens if we don't pass certification?
You receive a Detailed Findings Report identifying exactly what didn't meet the standard, with severity classifications (Critical, Major, Minor) and specific control references. You remediate on your own timeline using your internal team or external consultants. When ready, you return for a targeted re-assessment scoped only to the specific findings — not a full re-audit. If you remediate within 90 days, the re-assessment fee is just 10% of the original engagement.
Why can't we just handle compliance internally?
Internal compliance teams are valuable, but regulators, insurers, and courts require independent third-party verification. It's the same reason your financial statements get audited by an independent firm even though you have an internal accounting team. Self-attestation doesn't hold up when tested by enforcement actions, litigation, or insurance claims.
How is Clause5afe different from GRC software like Credo AI?
GRC platforms are internal dashboards — self-assessment tools. They help you manage governance workflows internally, but they don't provide independent, third-party verification that a regulator, insurer, or court would accept as proof of compliance. Clause5afe provides the independent certification that sits on top of your internal governance. Think of it this way: a GRC platform is like your internal accounting system. Clause5afe is like the independent auditor.
Does Clause5afe provide consulting or advisory services?
No. Never. This is the most important principle of our business. If we consult AND certify, we're auditing our own advice — the same conflict of interest that destroyed Arthur Andersen and led to Sarbanes-Oxley. We provide a Detailed Findings Report that identifies exactly what needs attention. Your team or external consultants use that report to remediate. Our independence is non-negotiable.
The EU AI Act deadline has been delayed. Should we wait?
No. The delay to December 2027 — now agreed by both the European Parliament and Council — gives you more time to prepare, but it doesn't reduce your obligations, change the penalties, or slow down the insurance market's response. Companies that certify now benefit from lower pre-enforcement pricing, have time to remediate any findings, and will be positioned ahead of competitors when enforcement arrives. The delay is a window of opportunity, not a reason to pause.

Have Questions?

Schedule a discovery call to discuss your organization's AI governance certification requirements.

Schedule Discovery Call →