- 1. The EU AI Act fines prohibited AI practices up to 7% of global turnover or €35 million, whichever is higher.
- 2. High-risk AI obligations roll out 2026-2027 in 27 EU states.
- 3. Risk tiers range from outright bans to voluntary codes.
By Craig Osborne, EU Correspondent
The EU AI Act (Regulation (EU) 2024/1689) took effect on 1 August 2024, after approval by the European Parliament on 13 March 2024 and by the Council on 21 May 2024. The European Commission classifies AI systems into four risk tiers, with prohibited practices facing fines of up to 7% of global annual turnover or €35 million, whichever is higher, per the official EUR-Lex text.
General-purpose AI (GPAI) models face transparency obligations from 2 August 2025, according to the European Commission's Digital Strategy page. High-risk AI in finance, such as credit scoring tools, requires conformity assessments (Article 43).
National competent authorities enforce rules and report to the Commission. Providers must draft voluntary codes by May 2025. The new AI Office oversees systemic-risk GPAI models like large language models.
Risk-Based Framework of the EU AI Act
The Parliament approved the Act after trilogue negotiations concluded in December 2023. Prohibited AI, including social scoring and real-time remote biometric identification in public spaces, is banned outright from 2 February 2025 (Article 5). France and Germany secured carve-outs for innovation sandboxes during the final talks.
This regulation harmonizes rules across the EU single market. Finance ministries align with bodies like the European Banking Authority (EBA). The EBA classifies robo-advisory and algorithmic trading as high-risk if they threaten financial stability.
Obligations for High-Risk AI Providers
High-risk providers must ensure data governance, technical documentation, human oversight, and cybersecurity (Articles 9-15). They register systems in the EU database before market placement. Deployers, including banks, must report serious incidents to authorities as they occur (Article 73).
Trading algorithms undergo checks for market stability risks. Importers and distributors share responsibilities along the AI value chain (Articles 23-25). The Act targets biases in loan approvals and hiring tools, citing Annex III examples.
- Risk Category: Prohibited · Examples: Social scoring, remote biometric ID · Timeline: Feb 2025 · Key Obligation: Banned outright
- Risk Category: High-Risk · Examples: Credit scoring, hiring AI, robo-advisory · Timeline: Aug 2026-2027 · Key Obligation: Conformity assessment
- Risk Category: GPAI/Limited · Examples: Chatbots, deepfakes, open-source models · Timeline: Aug 2025 · Key Obligation: Transparency labels
- Risk Category: Minimal · Examples: Video games, spam filters · Timeline: None · Key Obligation: Voluntary codes
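The tiers above can be sketched as a simple lookup. This is an illustrative sketch only: the tier keys, example strings, and dates paraphrase the summary list, not the Regulation's legal definitions, and the `tier_for` helper is a hypothetical name.

```python
# Illustrative map of the Act's four risk tiers (not legal advice).
# Tier names, examples, and dates paraphrase the summary above.
RISK_TIERS = {
    "prohibited": {"examples": ["social scoring", "remote biometric ID"],
                   "applies_from": "2025-02-02", "obligation": "banned outright"},
    "high_risk": {"examples": ["credit scoring", "hiring AI", "robo-advisory"],
                  "applies_from": "2026-08-02", "obligation": "conformity assessment"},
    "gpai_limited": {"examples": ["chatbots", "deepfakes", "open-source models"],
                     "applies_from": "2025-08-02", "obligation": "transparency labels"},
    "minimal": {"examples": ["video games", "spam filters"],
                "applies_from": None, "obligation": "voluntary codes"},
}

def tier_for(system: str):
    """Return the risk tier whose example list mentions the system, else None."""
    for tier, info in RISK_TIERS.items():
        if system in info["examples"]:
            return tier
    return None
```

For example, `tier_for("credit scoring")` returns `"high_risk"`, matching the conformity-assessment row above; a real classification would of course depend on the system's actual use case under Annex III.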
Enforcement and Fine Structure
Prohibited-practice violations carry the higher of €35 million or 7% of global annual turnover; high-risk breaches reach €15 million or 3%, as detailed in Article 99 and reported by Reuters on 1 August 2024. The Commission can order remedies such as product bans; national authorities investigate cases.
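The "whichever is higher" rule above is simple arithmetic: the ceiling is the larger of the fixed cap and the turnover percentage. A minimal sketch, with the `max_fine` helper and the dictionary keys as illustrative names of my own (the euro figures and percentages come from the article):

```python
# Statutory fine ceilings: fixed cap in EUR and share of global annual turnover.
# Actual fines are set case by case; these are only the upper bounds.
FINE_CEILINGS = {
    "prohibited": (35_000_000, 0.07),  # €35M or 7%, whichever is higher
    "high_risk": (15_000_000, 0.03),   # €15M or 3%, whichever is higher
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap or the turnover share."""
    fixed_cap, turnover_share = FINE_CEILINGS[violation]
    return max(fixed_cap, turnover_share * global_turnover_eur)
```

For a firm with €1 billion in global turnover, a prohibited-practice ceiling is 7% of turnover (€70 million), since that exceeds the €35 million cap; for a small firm the fixed cap dominates.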
Irish startups and German firms establish dedicated compliance teams. Non-EU providers, such as U.S. tech giants, adapt to access the single market.
Market Access Through Compliance
Certified high-risk AI fosters trust akin to GDPR compliance. SMEs gain support through regulatory sandboxes that member states must establish under Article 57. The European Central Bank (ECB) simulates AI tools for monetary policy, attracting €500 million in investments per ECB reports.
The Netherlands leads with national sandboxes. Compliant firms dominate cross-border finance services.
Finance Sector Implications
Fraud detection and portfolio optimization AI qualify as high-risk under Annex III. The Act complements the Digital Operational Resilience Act (DORA), which applies from 17 January 2025. Banks test algorithms for bias, per EBA guidelines issued 15 July 2024.
ESMA pilots fintech AI applications. The ECB explores AI in supervisory stress tests. These rules spur €2 billion in cross-border AI innovation funding, according to Commission estimates.
Key Timeline and Next Steps
GPAI rules apply from 2 August 2025; most high-risk obligations follow from 2 August 2026, with some categories extended to 2 August 2027. Member states must designate national competent authorities by 2 August 2025. The AI Office is finalizing codes of practice due by May 2025 and will update them for multimodal AI.
Tech executives prepare now: non-compliance risks market exclusion. The Commission launches guidance webinars in Q1 2025.
Frequently Asked Questions
What is the EU AI Act?
Regulation (EU) 2024/1689 classifies AI by risk into four categories. It bans prohibited practices from February 2025 and sets obligations for high-risk systems across 27 states via national authorities.
What are high-risk AI systems under EU AI Act?
High-risk systems include credit scoring and hiring AI. Providers must perform conformity assessments and ensure transparency. The rules apply from August 2026, with some categories extended to 2027.
How does EU AI Act affect financial services?
Banking AI such as fraud detection is classified as high-risk. The Act aligns with DORA, and the EBA oversees robo-advisory tools. Fines for high-risk violations reach 3% of global turnover or €15 million.
When do EU AI Act obligations start?
Prohibitions apply from February 2025, GPAI obligations from August 2025, and high-risk obligations from 2026-2027. Codes of practice are due by May 2025, and the AI Office manages the rollout.



