Brussels, November 14, 2024 – In a landmark move for Europe's burgeoning AI landscape, the European Commission on November 13 released draft Codes of Practice for providers of general-purpose AI (GPAI) models under the flagship EU AI Act. This development signals the bloc's commitment to fostering innovation while embedding robust safeguards against systemic risks posed by powerful AI systems.
The EU AI Act, which entered into force on August 1, 2024, represents the world's first comprehensive horizontal regulatory framework for AI. It categorizes AI systems by risk levels, with GPAI models – think foundational models like those powering ChatGPT or Google's Gemini – falling under high scrutiny due to their potential for widespread societal impact. The draft codes, open for stakeholder feedback until February 28, 2025, outline voluntary yet highly recommended practices to help companies comply with the Act's obligations.
Background: From Vision to Implementation
The AI Act's journey began in 2021 amid growing concerns over AI's unchecked proliferation. Following trilogue negotiations and adoption by the European Parliament and Council earlier in 2024, the regulation phases in compliance: prohibited AI practices are banned from February 2025, GPAI obligations apply from August 2025, and the bulk of the Act (including the high-risk system rules) applies from August 2026 – with GPAI models already on the market before August 2025 given until August 2027 to comply.
GPAI models – defined in the Act as models trained on large amounts of data that display significant generality and can competently perform a wide range of distinct tasks – must meet transparency, risk-assessment, and documentation requirements. Providers face duties such as technical documentation, copyright compliance, and systemic-risk mitigation – obligations the drafts now flesh out in detail.
The codes are being developed through a multi-stakeholder process led by the Commission's European AI Office, with independent experts chairing thematic working groups and input from industry, academia, civil society, and member state representatives. A consultation over the summer drew hundreds of submissions that shaped this first iteration.
Key Provisions in the Draft Codes
The documents span four main areas:
1. Transparency and Disclosure: Providers must publish detailed summaries of training data, model capabilities, and limitations. For instance, summaries should cover data sources, volumes, and curation methods, excluding sensitive IP details.
2. Risk Management: Emphasis on classifying models as posing 'systemic risk' when cumulative training compute exceeds 10^25 floating-point operations (FLOP). Mitigation includes adversarial testing, red-teaming, and serious-incident reporting. The codes recommend frameworks like NIST's AI Risk Management Framework, adapted for EU contexts.
3. Technical Documentation: Standardized templates for model cards, ensuring reproducibility for authorities without revealing trade secrets.
4. Copyright and Data Governance: Guidance on respecting EU copyright laws, including opt-out mechanisms for creators – a nod to ongoing debates with US tech giants.
For systemic risk models, additional codes (to be published later) will cover cybersecurity and energy efficiency, reflecting Europe's green tech priorities.
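To make the systemic-risk threshold concrete, here is a minimal sketch of how a provider might do a back-of-the-envelope check against the 10^25 FLOP line. It uses the common heuristic from the scaling-laws literature that training compute is roughly 6 × parameters × tokens; that approximation, the function names, and the example parameter/token counts are all illustrative assumptions, not an official EU methodology.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold
# for the 'systemic risk' presumption. The 6*N*D rule of thumb is a
# heuristic from the scaling-laws literature, not an EU-mandated method.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25


def estimated_training_flop(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens


def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds 10^25 FLOP."""
    return estimated_training_flop(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP


# Hypothetical model sizes (illustrative numbers, not real models):
print(presumed_systemic_risk(params=7e9, tokens=2e12))     # ~8.4e22 FLOP -> False
print(presumed_systemic_risk(params=1.8e12, tokens=13e12)) # ~1.4e26 FLOP -> True
```

In practice, providers would report actual measured or logged training compute rather than an estimate, but the sketch shows why only the very largest training runs trip the presumption.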
| Aspect | Requirements | Timeline |
|--------|--------------|----------|
| Transparency | Model cards & summaries | Aug 2025 |
| Risk assessment | Red-teaming & reporting | Ongoing |
| Systemic models | Enhanced mitigations | Aug 2025 (Aug 2027 for pre-existing models) |
| Feedback period | Stakeholder input | Until Feb 28, 2025 |
Implications for European Tech Ecosystem
Europe's AI sector, valued at an estimated €15 billion in 2023 and projected by some analysts to reach €200 billion by 2030, stands at a crossroads. Homegrown players like France's Mistral AI, Germany's Aleph Alpha, and Britain's Stability AI could gain a competitive edge through early compliance, positioning themselves as trusted providers amid US-China rivalry.
Mistral, fresh off a €600 million funding round in June 2024, has voiced support for clear rules, stating the codes will 'accelerate safe innovation.' Similarly, the UK – post-Brexit but aligned via the AI Safety Summit in Bletchley Park (2023) – is mirroring efforts with its own AI codes expected soon.
However, challenges loom. Smaller startups decry administrative burdens, while Big Tech lobbies for flexibility. OpenAI, Meta, and Google – all with EU footprints – must adapt. The Commission's ongoing DMA non-compliance proceedings against Apple underscore Brussels' enforcement zeal; expect similar rigour for AI.
Critics, including Max Tegmark's Future of Life Institute, argue codes lack enforcement teeth, relying on self-assessment. Civil society groups like AlgorithmWatch push for stronger audits.
Broader European and Global Context
This release aligns with the Commission's broader push to scale up European AI investment and computing capacity, including the AI innovation package unveiled in January 2024. It complements national strategies: France's France 2030 programme allocates €2.5 billion to AI, while Germany's KI-Strategie commits €5 billion.
Globally, the EU leads. The US's fragmented state laws pale against the AI Act's cohesion, while China's regulations prioritize control over openness. The codes could inspire international standards, as seen in G7 Hiroshima AI Process commitments.
Energy concerns are acute: Training GPT-4 reportedly consumed 50 GWh; EU codes urge efficiency benchmarks, tying into the Green Deal.
What's Next?
Post-feedback, the final codes are due by May 2025, overseen by the new European AI Office led by Lucilla Sioli. Non-compliance risks fines of up to 3% of global annual turnover or €15 million for GPAI providers – rising to 7% for breaches of the Act's prohibitions – a deterrent even for non-EU firms.
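The penalty structure can be summarized in a short illustrative calculation. The sketch below assumes the Act's "whichever is higher" rule for GPAI fines (3% of worldwide annual turnover or €15 million); the function name and the sample turnover figure are hypothetical, for illustration only.

```python
# Illustrative ceiling of an AI Act fine for a GPAI provider:
# up to 3% of worldwide annual turnover or EUR 15 million,
# whichever is higher. (Violations of the Act's outright
# prohibitions carry a separate, higher 7% / EUR 35M ceiling.)

def max_gpai_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of a GPAI-related fine under the AI Act."""
    return max(0.03 * annual_turnover_eur, 15_000_000)


print(max_gpai_fine_eur(10_000_000_000))  # EUR 10bn turnover -> 300000000.0
print(max_gpai_fine_eur(100_000_000))     # small firm -> floor of 15000000
```

For a large provider, the turnover-linked ceiling dwarfs the fixed floor, which is precisely what gives the regime teeth against global firms.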
For businesses: Engage now through the Commission's feedback channels and conduct gap analyses against the draft codes. European startups should also leverage EU funding programmes such as Horizon Europe's AI calls.
As AI permeates sectors from healthcare (e.g., Siemens Healthineers' AI diagnostics) to autonomous vehicles (Mobileye's EU pilots), these codes pave a balanced path. Europe risks falling behind without them, but over-regulation could stifle dynamism.
In the words of Henna Virkkunen, the Commission's incoming Executive Vice-President for Tech Sovereignty, Security and Democracy: 'These codes will make Europe the place where powerful AI is developed and deployed responsibly.'
The AI Act isn't just regulation – it's Europe's bid for technological sovereignty in the AI era.
Europe World News will monitor developments closely.