Tier 1: The Overview

State of Play

The headline: The EU moved early on AI regulation through the comprehensive AI Act, with the intention of gaining a first-mover economic advantage by setting the global regulatory “gold standard” for AI governance. The bloc is now seeking to balance the implementation of the AI Act with a renewed focus on improving the EU’s global competitiveness in AI and innovation.

The context: The EU has a thriving AI startup environment, due in part to its strong university and R&D culture. However, the risk-averse investment landscape means that scaling EU startups can be challenging, with US private AI investment levels now nearly eight-fold higher than those in the EU. To increase the EU’s competitiveness across the AI innovation chain, in February 2025 (as part of France’s AI Action Summit) the European Commission announced a new funding initiative, InvestAI, to mobilise €200 billion for AI investment from a combination of Member States and private sector partners. This includes a new fund of €20 billion to build state-of-the-art AI “gigafactories”, capable of powering the most advanced AI models.

The rules: The EU's AI Act establishes rules across every EU member state governing the use, development, import, distribution and manufacture of AI systems. Importantly, the AI Act affects businesses both inside and outside the EU (see below). It sits alongside existing legislative frameworks - such as those for data (GDPR) and consumer protection - that capture some AI use cases, creating a complex web of regulatory requirements depending on the sector and product. Each EU country is required to designate a regulator (termed a "national competent authority") to monitor and enforce the implementation of the AI Act in its jurisdiction.

What this means for founders

The AI Act’s broad ‘extra-territorial’ scope means that AI-enabled products sold, put into service, deployed, or the outputs of which are used in any EU market will need to be compliant, regardless of where the company is based and the models are trained. The majority of obligations under the AI Act apply to AI developers, with the most stringent requirements applying to AI systems that can be used in specified ‘high-risk’ use cases. The AI Act will be supplemented by guidance and ‘delegated acts’ from the EU Commission (see our full implementation timeline below), meaning that founders will need to constantly monitor the specific rules and enforcement deadlines that will affect them.  

We’re at the start of the EU’s AI Act journey — which will likely take quite some time. The hope is that it will provide some stability. While the EU scratches its head working out how to implement it, they should learn from the experience of GDPR.
Startup Coalition
One aspect of the EU AI Act that people sometimes forget is that only a few "traditional" machine learning systems are considered high-risk. There is an opportunity to demonstrate that your application isn’t impacting the health, safety or fundamental rights of the user - which means that you can be exempted from the strictest obligations.
Agata Hidalgo
European Affairs Lead, France Digitale

Forward Look

Startups stand to benefit from the EU’s competitiveness pivot.

AI innovation is now seen as a strategic imperative in a fragmenting geopolitical picture. Initiatives such as the above-mentioned InvestAI and the AI Factories - currently being set up in Finland, Germany (HammerHAI and JAIF), Greece, Italy, Luxembourg, Spain, Sweden, Austria, Bulgaria, France, Poland, and Slovenia - aim to increase the provision of EU-located computing and supercomputing resources. They are specifically targeted at startups in order to create an innovation pipeline that grows future “European AI champions” and reduces the dominance of US tech firms in the EU market.

Other startup-friendly AI policies to watch during the 2024-2029 European Commission mandate include the introduction of a Cloud and AI Development Act to accelerate data centre construction and increase compute capacity; an Apply AI Strategy to enhance new industrial applications of AI and improve public services; the establishment of the European AI Research Council to consolidate AI R&D resources; and an updated Data Strategy to enable more extensive data sharing among businesses.

Digital regulation - including the AI Act - is set to be streamlined as part of the competitiveness drive.

Concerns have mounted that difficulties in navigating the EU’s complex web of overlapping regulation could dissuade SMEs in particular from starting and scaling in the EU. The streamlining effort is likely to include better clarification of how potentially overlapping legislation - such as the AI Act and the GDPR - interacts, and some simplification or harmonisation of the compliance obligations across digital legislation for startups and mid-caps.

EU institutions, in particular the European Commission’s AI Office, are very open to having a dialogue on the EU AI Act’s implementation, and to making the framework work for founders.
Agata Hidalgo
European Affairs Lead, France Digitale

Timelines

February 2025: AI practices deemed to hold "unacceptable risk" become prohibited, and general provisions (e.g., requirements on businesses relating to ‘AI literacy’) will apply.

May 2025: The finalised Codes of Practice are expected to be published.

2 August 2025: Obligations on developers of ‘general-purpose’ AI systems will come into force, and the EU’s review of amendments to the list of prohibited AI practices will be completed.

2 August 2025: Member States must designate national competent authorities to monitor the implementation of the Act in each jurisdiction.

Q4 2025-Q1 2026: The Cloud and AI Development Act is expected to be introduced.

By June 2026: The deadline by which the Copyright Directive must be reviewed, which could see the legislation reopened to include provisions on the interaction between generative AI and copyright law.

2 August 2026: Obligations for AI systems deemed to be "high-risk" (not including those intended to be used as a safety component of a product) will come into force.

2 August 2026: Deadline for Member States to implement rules on penalties and to establish at least one operational AI regulatory sandbox.

2 August 2026: The AI Office will issue further guidance for providers and deployers on their obligations to inform users when they are interacting with an AI system.

2 August 2027: Obligations come into force for high-risk AI systems that are intended to be used as a safety component of a product.

By the end of 2030: Obligations go into effect for certain AI systems that are components of large-scale IT systems established by EU law in the areas of freedom, security and justice.

National Competent Authorities

MEMBER STATE | AI ACT - NATIONAL COMPETENT AUTHORITY | DESIGNATION STATUS
Denmark | - | Yes
Estonia | - | No
Finland | Traficom (expected based on draft legislation) | No
France | CNIL (expected) | No
Germany | - | No
Italy | Digital Italy Agency and National Cybersecurity Agency (expected based on draft legislation) | Yes
Netherlands | - | No
Spain | - | Yes
Sweden | - | No

Tier 2: The Details

Policy and Legislative Landscape

The EU AI Act

The legislation, which entered into force on 1 August 2024, categorises AI systems according to their level of risk, with a sliding scale of regulatory burdens placed on the developers and deployers of AI systems, as well as on distributors, importers and product manufacturers.

The risk hierarchy falls into four categories:

  • Prohibited AI practices: This is a small set of designated AI practices that are prohibited under the AI Act because they cause an unacceptable level of societal risk. Examples include social scoring; exploiting vulnerabilities due to age, disability, etc.; and using subliminal techniques to distort behaviour.
  • High-risk AI systems: The AI Act categorises as ‘high-risk’ those AI systems that are intended for:
    • (a) use as safety components in products (or are themselves products) that fall under certain EU product safety legislation (e.g. toys, cars), or
    • (b) certain other specified use-cases (e.g., placing of job advertisements; promotion or termination of workers; pricing for life or health insurance; traffic management).
    • However, there is also a narrow list of exceptions that will remove an AI system from the high-risk categorisation (e.g., where it is intended to be used for a narrow procedural task).
    • Both providers and deployers have obligations in respect of high-risk AI systems. For example, providers have obligations concerning design requirements, quality management systems, log keeping, corrective actions, CE marking, registrations in databases, accessibility and supply chains. Similarly, deployers have obligations regarding (among other things) human oversight, input controls, monitoring, log keeping and incident reporting.
  • AI systems subject to transparency requirements: The Act designates certain AI systems as presenting specific transparency risks, and so providers and deployers of these AI systems are required to make additional disclosures.
    • For example, providers of AI systems intended to interact directly with individuals (e.g., chatbots) must design their AI systems so that users are informed that they are interacting with an AI system (unless obvious), and providers of AI systems producing synthetic content (e.g., images) must (with limited exceptions) ensure outputs are marked and detectable as artificially-generated.
  • "General-purpose" AI systems: The AI Act includes rules for those AI models that display significant generality, are capable of competently performing a wide range of tasks, and are suitable for integration into various downstream systems (e.g. foundation models). The AI Act imposes obligations on providers (rather than deployers) of these models, requiring them to keep technical documentation about the model, provide these details to deployers and to the AI Office on request, put in place a policy to comply with EU copyright legislation, and make available a summary of their training data. The most powerful general-purpose AI systems, which the AI Act designates as having "systemic risk", are required to adhere to extra provisions on cybersecurity, incident reporting, risk mitigation and model evaluation.

The EU Data Act

This legislation came into force in January 2024 and will apply from September 2025, facilitating data-sharing in the EU by requiring data generated by connected devices (e.g. smart appliances, health and fitness monitors, and connected cars) to be made more widely accessible. Companies that hold this type of data will be obliged to share aspects of it on request with governments, other businesses and the individuals whose data has been collected. Founders may stand to benefit from the increased availability of datasets.

Policymakers

DG CNECT (the Directorate-General for Communications Networks, Content and Technology) is the lead policy development and implementation unit for the EU’s "digital economy & society" and "research & innovation" briefs. The unit coordinates funding, policy and legislative initiatives across digital technologies (including AI).

DG GROW (the Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs) supports the EU’s entrepreneurship and SME landscape, as well as coordinating the bloc’s internal trade (the single market) and industrial strategy. Its responsibilities include implementing pro-growth policies for the benefit of founders and SMEs, facilitating access for EU businesses to funding and global markets, and managing some elements of the Horizon Europe R&D funding programme (see below).

Enforcers

Established as a division within DG CNECT, the AI Office is set to become the centre of European AI expertise. Its main task will be to enforce the provisions of the AI Act equally across EU member states once they apply, with a focus on the model testing and documentation requirements for high-risk and general-purpose AI systems. This work will include drawing up the AI Act’s transparency requirements for model training data in line with EU copyright law, as well as issuing guidelines and codes of conduct to facilitate compliance with the AI Act. In line with the Commission’s wider objective of using the AI Act to facilitate rather than hinder innovation, the AI Office has also been tasked with working with the startup community to promote innovation ecosystems. 

Engagement is being channelled primarily through the AI Pact, a voluntary initiative to encourage industry compliance with the rules of the AI Act before they become applicable.

In charge of ensuring the consistent application and enforcement of data protection law across the European Economic Area (EEA), the European Data Protection Board (EDPB) adopts binding decisions addressed to the national data protection authorities to settle disputes over their enforcement of the GDPR, and provides general guidance (including guidelines, recommendations and best practices) to clarify and promote a common understanding of EU data protection laws - including in the context of AI.

For example, in December 2024, the EDPB published an opinion on data protection aspects related to the processing of personal data in the context of AI models, which relevant Member State data protection authorities will use as a central reference point for their own guidance provision.

Innovation Champions

Horizon Europe is the EU’s R&D funding programme with an indicative budget of €95.5 billion for 2021-2027. The programme facilitates research collaboration and strengthens the impact of innovation in the EU. Since Horizon Europe was set up in January 2021, it has funded 297 AI-related projects to the tune of €57.7 million.

Founder focus: Look at their R&D funding calls for AI opportunities.

The EuroHPC JU was established as a joint initiative between the EU, European countries and private partners with the aim of developing a supercomputing, quantum computing, service and data infrastructure ecosystem in Europe. The undertaking has a budget of €7 billion for 2021-2027.  In January 2024, the Commission announced its intention to upgrade the EuroHPC JU supercomputers’ AI capabilities, with the aim of increasing European AI startups’ access to high-quality data and compute.

Founder focus: Find funding opportunities across a number of AI-related projects.

Digital Europe is a trade association that engages with policymakers to advocate for a tech-friendly regulatory environment in the EU. Its network includes over 45,000 member businesses that operate and invest in Europe, including 109 corporations that are global leaders in their field and 41 national trade associations from across Europe. On AI, Digital Europe has collected insights from across Europe, with a particular interest in stimulating AI investment and uptake.

Founder focus: Join lobbying efforts on EU AI policy, and network with other SMEs.

As the largest digital innovation ecosystem in Europe, the EIT is a founder-focused organisation whose mission is to strengthen Europe’s position in the world by helping digital innovations reach the market and providing entrepreneurs with technology, talent, and growth support. 

Founder focus: Join their network for more information about deep tech investment, training, and community opportunities for SMEs in Europe.

With a network of over 45,000 technology SMEs in Europe, the Alliance is made up of 30 national and regional SME associations from EU member states and neighbouring countries, with the purpose of putting digital SMEs at the centre of the EU agenda. It manages EU-funded projects, puts on conferences and seminars, delivers position papers and research studies, and monitors EU digital policy and regulation.

Founder focus: Become a member to network with other SMEs in Europe, connect with European Member State networks, and liaise with policymakers.

An advocacy network focused on improving the policy environment for startups, with the mission of ensuring that the voices of startups are heard in government. It is particularly invested in issues that could grow digital entrepreneurship and the digital economy at large.

Founder focus: Become a member to get involved, vote on their Mandate and policy priorities, and run and vote for the Allied for Startups Board.

An international non-profit organisation which serves as an independent, apolitical, member-centric umbrella for national startup associations from Europe.

Founder focus: Become a member for networking and lobbying opportunities.