
When does the EU AI Act take effect?

Quick answer

The EU AI Act entered into force on 1 August 2024. Prohibitions on unacceptable-risk AI have applied since 2 February 2025. Most high-risk AI obligations, including those covering credit scoring and insurance risk pricing, apply from 2 August 2026. High-risk systems already on the market before that date stay outside the Act unless they undergo significant design changes.

The full answer

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, 20 days after publication in the Official Journal of the EU. Obligations don't all hit at once; they phase in over 36 months.

2 February 2025: Prohibited AI practices (Article 5)
2 August 2025: GPAI model rules (Chapter V); governance and notifying-authority provisions
2 August 2026: Full obligations for high-risk AI under Annex III
2 August 2027: Obligations for high-risk AI embedded in Annex I regulated products; compliance deadline for GPAI models already on the market before 2 August 2025
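
For teams tracking these milestones programmatically, a minimal countdown sketch in Python follows; the date labels are our own shorthand for the table above, not terms from the Act.

    from datetime import date

    # Milestone dates from the phased timeline of Regulation (EU) 2024/1689.
    MILESTONES = {
        date(2025, 2, 2): "Article 5 prohibitions apply",
        date(2025, 8, 2): "GPAI model rules and governance provisions apply",
        date(2026, 8, 2): "Full high-risk obligations (Annex III) apply",
        date(2027, 8, 2): "Annex I high-risk obligations; GPAI legacy-model deadline",
    }

    def countdown(today=None):
        """Print each milestone with the days remaining (or days since it passed)."""
        today = today or date.today()
        for deadline, label in sorted(MILESTONES.items()):
            delta = (deadline - today).days
            status = f"{delta} days away" if delta > 0 else f"{-delta} days ago"
            print(f"{deadline.isoformat()}: {label} ({status})")

    countdown()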

The prohibition date (2 February 2025) has passed. Article 5 bans, among other practices, social scoring, manipulative or exploitative techniques, emotion recognition in the workplace, and real-time remote biometric identification in publicly accessible spaces, with narrow law enforcement exceptions. Most banks don't run these systems, but it's worth confirming.

The 2 August 2026 date is the one compliance teams should focus on. Annex III designates the following as high-risk: creditworthiness assessment of natural persons (point 5(b), which carves out AI used to detect financial fraud), risk assessment and pricing in life and health insurance (point 5(c)), biometric categorization (point 1(b)), and AI in employment and recruitment (point 4). If your institution uses an AI model to score loan applications, assign customer risk tiers, or flag transactions for further review in ways that affect access to services, you're almost certainly in Annex III territory.
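
If you're building an AI inventory, a use-case lookup can make the triage concrete. A simplified sketch; the category keys and the mapping are our own illustration of the points cited above, not an authoritative reading of Annex III.

    # Illustrative mapping of banking AI use cases to Annex III points.
    # Simplified for inventory triage; final classification needs legal review.
    ANNEX_III_MAP = {
        "credit_scoring": "Annex III 5(b): creditworthiness assessment",
        "life_health_insurance_pricing": "Annex III 5(c): insurance risk pricing",
        "biometric_categorization": "Annex III 1(b): biometric categorization",
        "recruitment_screening": "Annex III 4: employment and recruitment",
    }

    def triage(use_case):
        """Return the candidate Annex III point, or flag for manual review."""
        return ANNEX_III_MAP.get(use_case, "Not mapped: needs manual legal review")

    print(triage("credit_scoring"))
    print(triage("aml_transaction_monitoring"))  # grey zone, discussed below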

For high-risk AI systems already placed on the market or put into service before 2 August 2026, the Act applies only if the system undergoes significant changes in its design after that date (Article 111(2)); systems intended for use by public authorities must comply by 2 August 2030 in any case. The European Commission is expected to issue guidance on what counts as a significant change. Don't assume a software update keeps you outside scope: a retrained or repurposed model may well qualify as a significant change.
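
The transitional rule reduces to a short decision procedure. A sketch, assuming the Article 111(2) conditions as summarized above; the "significant change" judgment is a legal call, so it's an input here, not something the code decides.

    from datetime import date

    CUTOFF = date(2026, 8, 2)  # high-risk obligations apply from this date

    def legacy_high_risk_in_scope(placed_on_market,
                                  significantly_changed_since_cutoff,
                                  used_by_public_authority):
        """Rough Article 111(2) triage for a legacy high-risk AI system."""
        if placed_on_market >= CUTOFF:
            return "In scope from 2 August 2026 (not a legacy system)"
        if significantly_changed_since_cutoff:
            return "In scope: a significant design change ends the transition"
        if used_by_public_authority:
            return "Must comply by 2 August 2030"
        return "Outside scope while unchanged; reassess on every model change"

    print(legacy_high_risk_in_scope(date(2024, 5, 1), False, False))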

The Act distinguishes between providers (those who build and place AI on the market) and deployers (those who use it). A bank that buys an AI credit scoring model from a vendor is a deployer. One that builds its own model is a provider. Providers carry the heavier burden under Articles 9-25: conformity assessments, technical documentation, logging, post-market monitoring, and registration in the EU AI database. Deployers face Article 26 obligations: human oversight, instructions-for-use compliance, and cooperation with provider audits.
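
The role split maps onto two obligation checklists. A sketch with items paraphrased from the articles cited above, not quoted from the Act:

    # Paraphrased obligation checklists keyed by role under the AI Act.
    OBLIGATIONS = {
        "provider": [  # Articles 9-25, paraphrased
            "conformity assessment",
            "technical documentation",
            "logging capability",
            "post-market monitoring",
            "registration in the EU AI database",
        ],
        "deployer": [  # Article 26, paraphrased
            "human oversight",
            "use per the provider's instructions",
            "cooperation with provider audits",
        ],
    }

    def role_for(bank_builds_model):
        """A bank that builds its own model is a provider; a buyer is a deployer."""
        return "provider" if bank_builds_model else "deployer"

    for item in OBLIGATIONS[role_for(bank_builds_model=False)]:
        print("- " + item)

One nuance the binary above flattens: under Article 25, a deployer that rebrands or substantially modifies a vendor's high-risk system can be reclassified as a provider and inherit the heavier checklist.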

Territorial scope is broad. Under Article 2(1)(c), the regulation applies to providers and deployers in third countries whenever the output produced by the AI system is used in the EU, regardless of where the provider is headquartered. Non-EU banks with EU customers aren't exempt. The European Parliament's overview of the Act confirms this extraterritorial application as a deliberate design choice.

Why this matters

The timeline pressure is real. A well-structured AI governance program for a mid-market bank typically takes 12-18 months to build from scratch, once you factor in vendor negotiations, policy drafting, staff training, and legal review. August 2026 isn't far off.

Enforcement won't be theoretical. National market surveillance authorities handle high-risk compliance, including deployer obligations; the European AI Office supervises GPAI models. Penalties for high-risk AI violations reach €15 million or 3% of global annual turnover, whichever is higher. Violations of the Article 5 prohibitions reach €35 million or 7%.
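
Because both penalty caps use a whichever-is-higher rule, the exposure arithmetic is a one-liner. A worked sketch:

    def max_fine(turnover_eur, prohibited_practice):
        """Upper bound of an AI Act fine: fixed cap or % of global annual
        turnover, whichever is higher (Article 99)."""
        cap, pct = (35e6, 0.07) if prohibited_practice else (15e6, 0.03)
        return max(cap, pct * turnover_eur)

    # A bank with €2bn global annual turnover:
    print(f"High-risk breach: up to €{max_fine(2e9, False):,.0f}")    # €60,000,000
    print(f"Prohibited practice: up to €{max_fine(2e9, True):,.0f}")  # €140,000,000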

The Act intersects directly with supervisory examinations. EU banking supervisors, including the ECB for significant institutions, are folding AI governance into their programs. If your next regulatory exam touches AI, expect questions about Annex III classification, model documentation, and human oversight procedures. Banks that can't answer those questions are looking at enforcement consequences that go beyond a fine.

AI used for AML transaction monitoring sits in a grey zone. Annex III point 5(b) excludes AI used to detect financial fraud, so pure alerting may not qualify as high-risk on its own; but if the model's output feeds into account closure, onboarding refusal, or credit decisions, the classification changes. You need a legal analysis of the full decision chain, not just the AI component in isolation.
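
One way to make the decision-chain analysis systematic is to record every downstream effect of the model's output and test whether any touches access to services. A sketch; the effect names are illustrative, and the output is a triage flag, not a legal conclusion:

    # Downstream effects that plausibly pull a system into Annex III territory.
    # Illustrative list, not a legal test.
    ACCESS_AFFECTING = {"account_closure", "onboarding_refusal", "credit_decision"}

    def chain_classification(downstream_effects):
        """Classify based on the full decision chain, not the model alone."""
        hits = downstream_effects & ACCESS_AFFECTING
        if hits:
            return f"Candidate high-risk: output feeds {', '.join(sorted(hits))}"
        return "Pure alerting: possibly outside Annex III, document the analysis"

    print(chain_classification({"analyst_alert"}))
    print(chain_classification({"analyst_alert", "onboarding_refusal"}))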

Customer risk ratings generated by AI for KYC or credit purposes are likely in scope if they influence access to services. Article 9 requires a risk management system throughout the model lifecycle: data governance, bias and accuracy testing, human review procedures, and remediation processes. Those aren't optional add-ons; they're preconditions for deploying the system.
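
Treated as a deployment gate, those components become an all-or-nothing checklist. A sketch; the component names paraphrase the list above rather than the Act's wording:

    # Article 9 risk-management components, paraphrased as a deployment gate.
    ARTICLE_9_COMPONENTS = [
        "data_governance",
        "bias_and_accuracy_testing",
        "human_review_procedures",
        "remediation_process",
    ]

    def ready_to_deploy(evidence):
        """All components need documented evidence before go-live."""
        missing = [c for c in ARTICLE_9_COMPONENTS if not evidence.get(c)]
        if missing:
            print("Blocked; missing evidence for: " + ", ".join(missing))
            return False
        return True

    ready_to_deploy({"data_governance": True, "bias_and_accuracy_testing": True})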

Perpetual KYC systems that use AI to monitor customer profiles continuously face the same obligations. When the AI triggers an action, whether an alert, a rating change, or an account hold, the documentation and human escalation path must be in place before that trigger fires.
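
That ordering requirement can be enforced as a precondition gate inside the monitoring pipeline itself. A sketch with hypothetical field names:

    from dataclasses import dataclass

    @dataclass
    class GovernanceRecord:
        """Minimum artifacts to have in place before an AI trigger can act."""
        documentation_complete: bool
        human_escalation_path_defined: bool

    def fire_trigger(action, record):
        """Refuse to act unless the governance preconditions are met."""
        if not (record.documentation_complete
                and record.human_escalation_path_defined):
            raise RuntimeError(f"Blocked '{action}': governance preconditions not met")
        print(f"Escalating '{action}' to human review")

    fire_trigger("rating_change", GovernanceRecord(True, True))
    # fire_trigger("account_hold", GovernanceRecord(True, False))  # raises RuntimeError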

Customer due diligence (CDD) and enhanced due diligence (EDD) processes powered by AI are a natural starting point for your compliance inventory. Document the AI's role in each due diligence decision and map it against Annex III. If AI is deciding whether enhanced diligence applies, you're likely in high-risk territory.
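
A minimal inventory record for that exercise might look like the following; the fields are our own suggestion, not a regulatory template:

    from dataclasses import dataclass

    @dataclass
    class DueDiligenceAIRecord:
        """One row in the compliance inventory for an AI-assisted CDD/EDD step."""
        system_name: str
        decision_role: str            # e.g. "decides whether EDD applies"
        annex_iii_candidate: str      # e.g. "Annex III 5(b)" or "none identified"
        affects_service_access: bool

    row = DueDiligenceAIRecord(
        system_name="edd_triage_model",
        decision_role="decides whether enhanced diligence applies",
        annex_iii_candidate="needs legal review",
        affects_service_access=True,
    )
    print(row)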

Who needs to comply with the EU AI Act? Any provider or deployer whose AI system's output is used in the EU. Where the company is based is irrelevant to scope.

The broader issue: banks that treat the EU AI Act as a technology project will miss the deadline. It's a governance problem. Who owns AI oversight? Who reviews model outputs and signs off on the technical documentation? If those questions don't have clear answers today, the clock is already running.
