
Who needs to comply with the EU AI Act?

Quick answer

The EU AI Act applies to providers placing AI systems on the EU market and to deployers using AI professionally in the EU, regardless of where the company is based. Non-EU firms are covered if the output of their AI systems is used in the EU. The law is Regulation (EU) 2024/1689.

The full answer

The EU AI Act (Regulation (EU) 2024/1689) applies to four parties:

  • Providers: any company or individual that develops an AI system and places it on the EU market or puts it into service in the EU
  • Deployers: any professional user of an AI system operating in the EU, regardless of where they're headquartered
  • Importers: EU-established entities importing AI systems from outside the EU
  • Distributors: any entity making AI systems available in the EU supply chain

The geographic reach is extraterritorial. A Singapore-based vendor whose risk-scoring model is licensed to a German bank is a provider under the Act. This isn't an edge-case reading. Article 2(1)(c) explicitly covers non-EU providers whose AI outputs are used in the EU, using the same territorial logic as GDPR.

High-risk AI in financial services

Annex III of the Act lists AI systems automatically classified as high-risk. The categories directly affecting financial institutions:

  • Credit scoring and creditworthiness assessment (Annex III, point 5(b)): any AI that evaluates whether an individual or business qualifies for credit
  • Life and health insurance risk assessment and pricing
  • Biometric categorization and emotion recognition
  • Employment, HR, and workforce management decisions

Any bank, insurer, or fintech deploying AI in these functions is a deployer with Chapter III obligations under Article 26: using the system according to the provider's instructions, assigning competent human oversight, monitoring operation, retaining logs, and reporting serious incidents. Conformity assessments and technical documentation sit with the provider, unless the deployer crosses into the provider role (see the next section).

The provider-deployer line shifts more than expected

Under Article 25, a deployer takes on provider obligations if it puts its own name or trademark on a high-risk system, makes a substantial modification to it, or changes its intended purpose so that it becomes high-risk. Banks that materially customize third-party AML or credit models, including by fine-tuning, can therefore become providers rather than mere deployers. The distinction matters because providers carry the full documentation and conformity-assessment burden.

If your compliance team is assuming the vendor handles all the obligations, check that assumption against what your engineers actually did to the model before deployment.
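The triage above can be sketched as a helper function. This is an illustrative simplification of Articles 3 and 25, not legal advice; the flag names are our own, and "substantial modification" is a legal judgment that no boolean can capture.

```python
def classify_role(develops_system: bool,
                  rebrands_system: bool,
                  substantially_modifies: bool,
                  changes_intended_purpose: bool) -> str:
    """Rough provider-vs-deployer triage under the EU AI Act.

    Illustrative only: each flag stands in for a fact-specific legal
    assessment under Articles 3 and 25.
    """
    if develops_system:
        # Developed the system and placed it on the market / into service.
        return "provider"
    if rebrands_system or substantially_modifies or changes_intended_purpose:
        # Article 25: the deployer assumes provider obligations.
        return "provider"
    return "deployer"

# A bank that substantially modifies a vendor's credit model may cross the line:
print(classify_role(develops_system=False,
                    rebrands_system=False,
                    substantially_modifies=True,
                    changes_intended_purpose=False))  # provider
```

A compliance team can walk each deployed model through these questions with the engineers who integrated it; the answer often differs model by model.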

Who is not covered

The Act doesn't apply to:

  • AI for military or national security purposes
  • AI used by individuals in a purely personal, non-professional context
  • AI under research and development before market placement
  • Free, open-source AI (with exceptions for general-purpose AI models that pose systemic risk)

Penalties

Non-compliance penalties reach €35 million or 7% of global annual turnover, whichever is higher, for violations involving prohibited AI practices. High-risk AI violations carry a ceiling of €15 million or 3% of global turnover. Supplying incorrect, incomplete, or misleading information to authorities: €7.5 million or 1% of turnover. These figures come from Article 99 of the official Act text.
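Because each cap is the higher of a fixed amount and a share of turnover, the applicable maximum scales with company size. A minimal sketch of that arithmetic, with the figures taken from Article 99 (the tier names are ours):

```python
# Maximum administrative fines under Article 99:
# (fixed cap in EUR, share of worldwide annual turnover).
# The applicable maximum is whichever is higher.
FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),
    "high_risk_violations": (15_000_000, 0.03),
    "false_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# For a bank with EUR 2 billion turnover, 7% (EUR 140m) exceeds the EUR 35m cap:
print(f"{max_fine('prohibited_ai', 2_000_000_000):,.0f}")  # 140,000,000
```

For a small fintech, the fixed cap dominates; for a large bank, the turnover percentage does.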

Enforcement timelines

  • February 2, 2025: Prohibitions on unacceptable-risk AI and AI literacy obligations in force
  • August 2, 2025: GPAI model obligations, governance structure, and penalties apply
  • August 2, 2026: All Annex III high-risk AI obligations enforced
  • August 2, 2027: Extended deadline for high-risk AI embedded in regulated products (Annex I)
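A small helper makes the phase-in concrete: given a date, which tranches of obligations already apply? The milestone labels are abbreviated for the sketch; the dates follow the Act's Article 113 timeline.

```python
from datetime import date

# Phase-in milestones under Article 113 (labels abbreviated).
MILESTONES = [
    (date(2025, 2, 2), "prohibited-AI provisions"),
    (date(2025, 8, 2), "GPAI model obligations"),
    (date(2026, 8, 2), "Annex III high-risk obligations"),
    (date(2027, 8, 2), "high-risk AI in regulated products (Annex I)"),
]

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestone labels whose start date has passed."""
    return [label for start, label in MILESTONES if as_of >= start]

print(obligations_in_force(date(2026, 9, 1)))
# ['prohibited-AI provisions', 'GPAI model obligations',
#  'Annex III high-risk obligations']
```

Running this for a planned go-live date shows which obligations a new model must satisfy on day one.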

Most financial services AI (credit models, AML systems, fraud detection tools) falls under Annex III. August 2026 is the hard deadline for most banks.

Why this matters

Banks and financial institutions are disproportionately affected. Credit scoring is named in Annex III. AI used for AML transaction monitoring could qualify as high-risk depending on whether it makes or informs individual-level decisions. The line between "decision support" and "consequential AI decision" is narrow, and regulators will draw it conservatively.

The scope of what triggers a regulatory exam is also expanding: the European Banking Authority lists AI governance as a supervisory focus area for financial institutions. Banks deploying AI-driven customer risk scoring (which bears on how often customer risk ratings should be refreshed and on the difference between CDD and EDD) should assume those systems are in scope.

The dual compliance burden is real. A bank using AI for perpetual KYC must satisfy both GDPR automated decision-making rules (Article 22) and EU AI Act high-risk obligations. These aren't duplicative; they cover different ground. GDPR governs data processing. The AI Act governs system design, documentation, and human oversight.

Institutions that fail an AML exam while also carrying undocumented AI models will face compounded findings across both regimes. Getting conformity documentation in place before 2026 is cleaner than explaining gaps under exam conditions.

When the EU AI Act takes effect varies by provision. August 2026 is the date that matters most for financial services AI.
