Tier 1: The Overview

State of Play

The headline: The EU has moved early on AI regulation and is seeking to gain a first-mover economic advantage by setting the global regulatory "gold standard".

The context: The EU has a thriving AI startup environment, due in part to its strong university and R&D culture. However, the risk-averse investment landscape means that scaling EU startups can be challenging, with US private AI investment levels now nearly eight-fold higher than those in the EU. To counteract this, public funding for AI is growing, channelled through a patchwork of EU-wide initiatives.

The rules: The EU's AI Act establishes rules in every EU member state around the use, development, importing, distribution and manufacture of AI systems. Importantly, the AI Act will affect businesses both inside and outside the EU (see below). It sits alongside existing legislative frameworks such as those for data (GDPR) and consumer protection that capture some AI use cases, creating a complex web of regulatory requirements depending on the sector and product. Each EU country will be required to nominate a regulator (termed a "national competent authority") to monitor and enforce the implementation of the AI Act in their country.

What this means for founders

The AI Act’s broad ‘extra-territorial’ scope means that AI-enabled products that are sold, put into service, or deployed in any EU market (or whose outputs are used in the EU) will need to be compliant, regardless of where the company is based or where its models are trained. The majority of obligations under the AI Act apply to AI developers, with the most stringent requirements applying to AI systems that can be used in specified ‘high-risk’ use cases. The AI Act will be supplemented by guidance and ‘delegated acts’ from the EU Commission (see our full implementation timeline below), meaning that founders will need to constantly monitor the specific rules and enforcement deadlines that will affect them.

We’re at the start of the EU’s AI Act journey — which will likely take quite some time. The hope is that it will provide some stability. While the EU scratches its head working out how to implement it, they should learn from the experience of GDPR.
Startup Coalition
One aspect of the EU AI Act that people sometimes forget is that only a few "traditional" machine learning systems are considered high-risk. There is an opportunity to demonstrate that your application isn’t impacting the health, safety or fundamental rights of the user, which means that you can be exempted from the strictest obligations.
Agata Hidalgo
European Affairs Lead, France Digitale

Forward Look

EU startups stand to benefit from the European Commission’s renewed focus on increasing EU competitiveness

After winning a second term as Commission President, Ursula von der Leyen has shifted her focus towards increasing EU resilience and capabilities in AI and tech. The most important innovation initiatives promised in her political guidelines for the 2024-2029 Commission mandate are: "AI factories" to ensure access to new, tailored supercomputing capacity for AI startups; an Apply AI Strategy to enhance new industrial applications of AI and improve public services; a European AI Research Council to consolidate AI R&D resources; and a Data Union Strategy to enable more extensive data sharing among businesses.

The implementation of the AI Act is still at an early stage

Although the AI Act became law in August 2024, it won’t be implemented in full until the end of 2030 (though for most businesses the relevant rules will apply by August 2026). There is a phased implementation period for parts of the rulebook, to give the Commission and AI Office time to lay out the technical details and benchmarks needed to enforce the Act. There has been substantial commentary on the Act’s effects on the EU as a destination for AI founders: some argue the Act presents an opportunity for founders to specialise and grow within the EU’s regulatory system, while others note that its perceived complexity may delay AI product launches in the EU. It is too early to tell what the Act will mean in practice for AI startups operating in the EU.

EU institutions, in particular the European Commission’s AI Office, are very open to dialogue on the EU AI Act’s implementation, and to making the framework work for founders.
Agata Hidalgo
European Affairs Lead, France Digitale

Timelines

November 2024: The first draft of the Codes of Practice (essentially the technical guidelines for general-purpose AI model compliance with the AI Act) is expected to be published.

Q4 2024: Work on the AI Liability Directive is expected to restart, after the legislation was paused to examine areas of overlap with the Product Liability Directive and the AI Act.

2 February 2025: AI practices deemed to hold "unacceptable risk" will be prohibited, and general provisions (e.g., requirements on businesses relating to ‘AI literacy’) will apply.

May 2025: The finalised Codes of Practice will be published.

2 August 2025: Obligations on developers of ‘general-purpose’ AI systems will come into force, and the EU’s review of amendments to the list of prohibited AI practices will be completed.

2 August 2025: Member States must designate national competent authorities to monitor the implementation of the Act in each jurisdiction.

By June 2026: Deadline for the review of the Copyright Directive, which could see the legislation reopened to include provisions on the interaction between generative AI and copyright law.

2 August 2026: Obligations for AI systems deemed to be "high-risk" (not including those intended to be used as a safety component of a product) will come into force.

2 August 2026: Deadline for Member States to implement rules on penalties and to establish at least one operational AI regulatory sandbox.

2 August 2026: The AI Office will issue further guidance for providers and deployers on the obligation to inform users when they are interacting with an AI system.

2 August 2027: Obligations go into effect for high-risk AI systems that are intended to be used as a safety component of a product.

By end of 2030: Obligations go into effect for certain AI systems that are components of large-scale IT systems established by EU law in the areas of freedom, security and justice.

National Competent Authorities

MEMBER STATE | AI ACT - NATIONAL COMPETENT AUTHORITY | DESIGNATION STATUS
Denmark | (not listed) | Yes
Estonia | (not listed) | No
Finland | (not listed) | No
France | CNIL (expected) | No
Germany | (not listed) | No
Italy | (not listed) | Yes
Netherlands | (not listed) | No
Spain | Spanish AI Supervision Agency (AESIA) [website not yet operational] | Yes
Sweden | (not listed) | No

Tier 2: The Details

Policy and Legislative Landscape

EU AI Act: The legislation, which entered into force on 1 August 2024, categorises AI according to its level of risk, with a sliding scale of regulatory burdens placed on the developers and deployers of AI systems, as well as on distributors, importers and product manufacturers.

The risk hierarchy falls into four categories:

  • Prohibited AI practices: Examples include social scoring; exploiting vulnerabilities due to age, disability, etc.; using subliminal techniques to distort behaviour. This is a small set of designated AI practices that are prohibited under the AI Act because they cause an unacceptable level of societal risk.
  • High risk AI systems: The AI Act categorises as ‘high-risk’ those AI systems that are intended for:
    • (a) use as safety components in products (or are themselves products) that fall under certain EU product safety legislation (e.g. toys, cars), or
    • (b) certain other specified use-cases (e.g., placing of job advertisements; promotion or termination of workers; pricing for life or health insurance; traffic management).
    • However, there is also a narrow list of exceptions that will remove an AI system from the high-risk categorisation (e.g., where it is intended to be used for a narrow procedural task).
    • Both providers and deployers have obligations in respect of high-risk AI systems. For example, providers have obligations concerning design requirements, quality management systems, log keeping, corrective actions, CE marking, registrations in databases, accessibility and supply chains. Similarly, deployers have obligations regarding (among other things) human oversight, input controls, monitoring, log keeping and incident reporting.
  • AI systems subject to transparency requirements: The Act designates certain AI systems as presenting specific transparency risks, and so providers and deployers of these AI systems are required to make additional disclosures.
    • For example, providers of AI systems intended to interact directly with individuals (e.g., chatbots) must design their AI systems so that users are informed that they are interacting with an AI system (unless obvious), and providers of AI systems producing synthetic content (e.g., images) must (with limited exceptions) ensure outputs are marked and detectable as artificially-generated.
  • "General-purpose" AI systems: The AI Act includes rules for those AI models that display significant generality, are capable of competently performing a wide range of tasks, and are suitable for integration into various downstream systems (e.g. foundation models). The AI Act imposes obligations on providers (rather then deployers) of these models, requiring them to keep technical documentation about the model, provide these details to deployers and to the AI Office on request, put in place a policy to comply with EU copyright legislation, and make available a summary of their training data. The most powerful general-purpose AI systems, which the AI Act designates as having "systemic risk", are required to adhere to extra provisions on cybersecurity, incident reporting, risk mitigation and model evaluation.

EU Data Act: This legislation came into force in January 2024 and will apply from September 2025, facilitating data-sharing in the EU by requiring data generated from connected devices (e.g. smart appliances, health and fitness monitors, and connected cars) to be made more widely accessible. The companies that own this type of data will be obliged to share aspects of it on request with governments, other businesses and the individuals whose data has been collected. Founders may stand to benefit from the increased availability of datasets.

AI Liability Directive: Due to the unique properties of AI systems (e.g., black box decision-making) there have been long-standing legal difficulties in proving causal links between harmful outputs of AI systems and faults of AI system providers. Accordingly, the AI Liability Directive is being prepared to update national civil liability rules across the EU to make it easier for victims of AI-caused damage to prove who is liable and to receive compensation. The AI Liability Directive is expected to do this by requiring national courts to presume that the provider of an AI system is liable for damage in certain circumstances. It is also expected to empower national courts to order the providers of certain AI systems to disclose information about those AI systems, as part of a claim against the providers for damage suspected to have been caused by the AI systems. The form of the Directive is currently in flux.

Policymakers

DG CNECT is the lead policy development and implementation unit for the EU’s "digital economy & society" and "research & innovation" briefs. The unit coordinates funding, policy and legislative initiatives across digital technologies (including AI).

DG GROW supports the EU’s entrepreneurship and SME landscape, as well as coordinating the bloc’s internal trade (the single market) and industrial strategy. Its responsibilities include implementing pro-growth policies for the benefit of founders and SMEs, facilitating access for EU businesses to funding and global markets, and managing some elements of the Horizon Europe R&D funding programme (see below).

Enforcers

Established as a division within DG CNECT, the AI Office is set to become the centre of European AI expertise. Its main task will be to enforce the provisions of the AI Act consistently across EU member states once they apply, with a focus on the model testing and documentation requirements for high-risk and general-purpose AI systems. This work will include drawing up the AI Act’s transparency requirements for model training data in line with EU copyright law, as well as issuing guidelines and codes of conduct to facilitate compliance with the AI Act. In line with the Commission’s wider objective of using the AI Act to facilitate rather than hinder innovation, the AI Office has also been tasked with working with the startup community to promote innovation ecosystems.

Engagement is being channelled primarily through the AI Pact, a voluntary initiative to encourage industry compliance with the rules of the AI Act before they become legally applicable.

Innovation Champions

Horizon Europe is the EU’s R&D funding programme with an indicative budget of €95.5 billion for 2021-2027. The programme facilitates research collaboration and strengthens the impact of innovation in the EU. Since Horizon Europe was set up in January 2021, it has funded 297 AI-related projects to the tune of €57.7 million.

Founder focus: Look at their R&D funding calls for AI opportunities.

The EuroHPC JU was established as a joint initiative between the EU, European countries and private partners with the aim of developing a supercomputing, quantum computing, service and data infrastructure ecosystem in Europe. The undertaking has a budget of €7 billion for 2021-2027. In January 2024, the Commission announced its intention to upgrade the EuroHPC JU supercomputers’ AI capabilities, with the aim of increasing European AI startups’ access to high-quality data and compute.

Founder focus: Find funding opportunities across a number of AI-related projects.

Digital Europe is a trade association that engages with policymakers to advocate for a tech-friendly regulatory space in the EU. Its network includes a membership of over 45,000 businesses that operate and invest in Europe, including 109 corporations that are global leaders in their field and 41 national trade associations from across Europe. On AI, Digital Europe has collected industry insights, with a particular interest in stimulating AI investment and uptake across Europe.

Founder focus: Join lobbying efforts on EU AI policy and network with other SMEs.

As the largest digital innovation ecosystem in Europe, the EIT is a founder-focused organisation whose mission is to strengthen Europe’s position in the world by helping digital innovations reach the market and providing entrepreneurs with technology, talent, and growth support. 

Founder focus: Join their network for more information about deep tech investment, training, and community opportunities for SMEs in Europe.

European DIGITAL SME Alliance: With a network of over 45,000 technology SMEs in Europe, the Alliance is made up of 30 national and regional SME associations from EU member states and neighbouring countries, with the purpose of putting digital SMEs at the centre of the EU agenda. It manages EU-funded projects, puts on conferences and seminars, delivers position papers and research studies, and monitors EU digital policy and regulation.

Founder focus: Become a member to network with other SMEs in Europe, connect with European member state networks, and liaise with policymakers.

Allied for Startups: An advocacy network focused on improving the policy environment for startups, particularly invested in issues which could grow digital entrepreneurship and the digital economy at large. Its mission is to ensure that the voices of startups are heard in government.

Founder focus: Become a member to get involved, vote on their mandate and policy priorities, and run and vote for the Allied for Startups Board.

An international non-profit organisation which serves as an independent, apolitical, member-centric umbrella for national startup associations from Europe.

Founder focus: Become a member for networking and lobbying opportunities.