Tier 1: The Overview
State of Play
The headline: The EU has moved early on AI regulation and is seeking to gain a first-mover economic advantage by setting the global regulatory "gold standard".
The context: The EU has a thriving AI startup environment, due in part to its strong university and R&D culture. However, the risk-averse investment landscape means that scaling EU startups can be challenging, with US private AI investment levels now nearly eight-fold higher than those in the EU. To counteract this, public funding for AI is growing, channelled through a patchwork of EU-wide initiatives.
The rules: The EU's AI Act establishes rules in every EU member state governing the use, development, import, distribution and manufacture of AI systems. Importantly, the AI Act will affect businesses both inside and outside the EU (see below). It sits alongside existing legislative frameworks such as those for data (GDPR) and consumer protection that capture some AI use cases, creating a complex web of regulatory requirements depending on the sector and product. Each EU country will be required to designate a regulator (termed a "national competent authority") to monitor and enforce the implementation of the AI Act in its jurisdiction.
What this means for founders
The AI Act’s broad ‘extra-territorial’ scope means that AI-enabled products that are sold, put into service, or deployed in any EU market, or whose outputs are used there, will need to be compliant, regardless of where the company is based or where its models are trained. The majority of obligations under the AI Act apply to AI developers, with the most stringent requirements applying to AI systems that can be used in specified ‘high-risk’ use cases. The AI Act will be supplemented by guidance and ‘delegated acts’ from the EU Commission (see our full implementation timeline below), meaning that founders will need to constantly monitor the specific rules and enforcement deadlines that will affect them.
One aspect of the EU AI Act that people sometimes forget is that only a few "traditional" machine learning systems are considered high-risk. There is an opportunity to demonstrate that your application doesn’t impact the health, safety or fundamental rights of the user, which means that you can be exempted from the strictest obligations.
Agata Hidalgo
European Affairs Lead, France Digitale
Forward Look
EU startups stand to benefit from the European Commission’s renewed focus on increasing EU competitiveness
After winning a second term as Commission President, Ursula von der Leyen has shifted her focus towards increasing EU resilience and capabilities in AI and tech. The most important innovation initiatives set out in her political guidelines for the 2024-2029 Commission mandate are: "AI factories" to ensure access to new, tailored supercomputing capacity for AI startups; an Apply AI Strategy to enhance new industrial applications of AI and improve public services; a European AI Research Council to consolidate AI R&D resources; and a Data Union Strategy to enable more extensive data sharing among businesses.
The implementation of the AI Act is still at an early stage
Although the AI Act became law in August 2024, it won’t be implemented in full until the end of 2030 (though for most businesses the relevant rules will apply by August 2026). There is a phased implementation period for parts of the rulebook, giving the Commission and AI Office time to lay out the technical details and benchmarks needed to enforce the Act. There has been substantial commentary on the Act’s effect on the EU as a destination for AI founders: some argue that the Act presents an opportunity for founders to specialise and grow within the EU’s regulatory system, while others note that its perceived complexity may lead to delayed AI product launches in the EU. It is too early to tell what the Act will mean in practice for AI startups operating in the EU.
Timelines
November 2024: The first draft of the Codes of Practice (essentially the technical guidelines for general-purpose AI model compliance with the AI Act) is expected to be published.
Q4 2024: Work on the AI Liability Directive is expected to restart, after the legislation was paused to examine areas of overlap with the Product Liability Directive and the AI Act.
February 2025: AI practices deemed to hold "unacceptable risk" will be prohibited, and general provisions (e.g., requirements on businesses relating to ‘AI literacy’) will apply.
May 2025: The finalised Codes of Practice will be published.
2 August 2025: Obligations on developers of ‘general-purpose’ AI systems will come into force, and the EU’s review of amendments to the list of prohibited AI practices will be completed.
2 August 2025: Member States must designate national competent authorities to monitor the implementation of the Act in each jurisdiction.
By June 2026: The deadline by which the Copyright Directive must be reviewed, which could see the legislation reopened to include provisions on the interaction between generative AI and copyright law.
2 August 2026: Obligations for AI systems deemed to be "high-risk" (not including those intended to be used as a safety component of a product) will come into force.
2 August 2026: Deadline for Member States to implement rules on penalties and to establish at least one operational AI regulatory sandbox.
2 August 2026: The AI Office will issue further guidance for providers and deployers on their obligations to inform users when they are interacting with an AI system.
2 August 2027: Obligations come into force for high-risk AI systems that are intended to be used as a safety component of a product.
By end of 2030: Obligations come into force for certain AI systems that are components of large-scale IT systems established by EU law in the areas of freedom, security and justice.