
EU AI Act impact on startups & SMEs: 2025 guide

14 min read

A Berlin-based startup building AI-powered legal research tools spent eighteen months and over €200,000 preparing for EU AI Act compliance before its product launched. A competing startup in Singapore, targeting the same market but without EU operations, spent nothing. That gap illustrates the core tension the EU AI Act creates for European startups and SMEs. The regulation imposes real costs that competitors in other jurisdictions do not bear, while simultaneously offering something those competitors cannot easily replicate: a credible regulatory framework that enterprise and public-sector buyers increasingly require before purchasing AI products.

Whether the trade-off favors European AI startups depends heavily on their specific context. A startup building minimal-risk tools like spam filters or AI-assisted design software faces almost no compliance burden. A startup building AI for credit decisions, medical diagnostics, or employment screening faces the full weight of Chapter III requirements. Most startups fall somewhere in between, and the compliance calculus is genuinely uncertain.

The risk classification that determines everything

The AI Act's risk-based approach means that compliance obligations vary dramatically depending on what the AI system does, not just how it was built. The four-tier classification, from unacceptable through high-risk to limited and minimal risk, determines whether a startup has essentially no new obligations or faces a compliance program that rivals what the GDPR demanded.

Unacceptable risk systems are prohibited outright. These include social scoring systems that rank individuals based on their behavior across unrelated domains, AI systems that manipulate people through subliminal techniques that bypass their conscious awareness, and most real-time biometric identification in public spaces. No startup should be building these, and if your product description sounds like it might include any of these capabilities, getting legal advice before continuing development is urgent.

High-risk systems, defined in Annex III of the Act, are the category that creates the heaviest compliance burden for startups. Annex III covers eight domains: biometric identification and categorization, critical infrastructure management, education and vocational training, employment and worker management, essential private and public services (including credit and insurance), law enforcement, migration and border control, and administration of justice. Many B2B SaaS products for HR, finance, or public sector clients will fall into one of these categories regardless of whether their creators thought of them as "high-risk AI."

For limited-risk systems, mainly chatbots and tools that generate synthetic media, the obligations are primarily transparency requirements: disclose that users are interacting with AI, and label AI-generated synthetic media as such. These requirements add friction to product design but do not require the comprehensive documentation and conformity assessment that high-risk systems demand.
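To make the tiering concrete, here is a minimal triage sketch in Python. The tier names track the Act, but the domain keywords and obligation summaries are simplified illustrations for a first-pass screen, not an authoritative legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "full Chapter III compliance program"
    LIMITED = "transparency duties (Article 50)"
    MINIMAL = "no new obligations"

# Simplified stand-ins for the eight Annex III domains.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

def triage(domain: str, prohibited_practice: bool, user_facing_generation: bool) -> RiskTier:
    """Rough first-pass classification; real classification needs counsel."""
    if prohibited_practice:              # e.g. social scoring, manipulation
        return RiskTier.UNACCEPTABLE
    if domain in ANNEX_III_DOMAINS:      # e.g. HR screening, credit scoring
        return RiskTier.HIGH
    if user_facing_generation:           # e.g. chatbots, synthetic media
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment", False, False))   # RiskTier.HIGH
```

Even a crude screen like this surfaces the key point early: the domain, not the model architecture, usually decides the tier.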

What high-risk compliance actually requires

For startups that build high-risk AI systems, the Act imposes a comprehensive set of obligations that begin during development and continue throughout the product's lifetime.

Article 9 requires a risk management system that is not a one-time analysis but an ongoing process lasting throughout the entire lifecycle of the system. This means documented risk identification and evaluation before deployment, risk mitigation built into the system design, and testing to verify that those mitigations work. The risk management process must be updated when the system changes significantly or when new risks are identified post-deployment.
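In engineering terms, "ongoing process" means the risk analysis is a living artifact rather than a launch document. The sketch below, with field names that are illustrative assumptions rather than anything the Act prescribes, models a risk register whose entries drop back into review whenever the system changes significantly.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str
    severity: str             # e.g. "low" / "medium" / "high"
    mitigation: str           # the control built into the system design
    verified_by_test: bool    # has testing confirmed the mitigation works?
    last_reviewed: date

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def reopen_all(self, reason: str) -> None:
        """A significant system change or post-market finding forces re-review."""
        for risk in self.risks:
            risk.verified_by_test = False
        print(f"Re-review triggered: {reason}")

register = RiskRegister()
register.risks.append(Risk(
    description="Lower accuracy for non-native speakers",
    severity="high",
    mitigation="Augmented training data; per-group accuracy thresholds",
    verified_by_test=True,
    last_reviewed=date(2025, 6, 1),
))
register.reopen_all("new model version deployed")
```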

Article 10 establishes data governance requirements for training, validation and testing data. The data must be relevant, representative, and free from errors that could cause the system to produce discriminatory or otherwise harmful outputs. Crucially, the data must be assessed for possible biases. For startups that train on publicly available datasets, this requires evaluating whether those datasets represent the full population that will be affected by the system, which in practice often means they do not.
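One way to operationalize that assessment, assuming group labels exist for the training examples and census-style reference shares for the affected population, is a representativeness check like the sketch below; the 50% tolerance and the reference figures are illustrative assumptions, not values from the Act.

```python
from collections import Counter

def representation_gaps(group_labels, reference_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls below
    `tolerance` times their share of the affected population."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * ref_share:
            gaps[group] = (observed, ref_share)
    return gaps

labels = ["A"] * 900 + ["B"] * 80 + ["C"] * 20   # training data group labels
reference = {"A": 0.60, "B": 0.25, "C": 0.15}    # assumed population shares
print(representation_gaps(labels, reference))
# {'B': (0.08, 0.25), 'C': (0.02, 0.15)}
```

A check like this does not prove the absence of bias, but it produces exactly the kind of documented evidence of assessment that Article 10 asks for.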

Technical documentation under Article 11 and Annex IV must cover the general description of the system, the development process, the training methodology, the validation results, the capabilities and limitations, and the instructions for use. This documentation is not just for internal purposes; it must be made available to national market surveillance authorities on request, and deployers need access to sufficient information to evaluate whether the system meets their requirements.

Conformity assessment, under Article 43, must be completed before a high-risk system is placed on the market. For most Annex III high-risk systems, self-assessment by the provider under internal control is permitted. Third-party assessment by a notified body is required for biometric systems where harmonized standards have not been fully applied, and for high-risk systems embedded in products that sectoral legislation (such as the Medical Device Regulation) already subjects to third-party assessment.

The provisions designed specifically for SMEs

The EU AI Act includes several provisions that specifically address the situation of small and medium-sized enterprises and startups, acknowledging that the compliance burden falls disproportionately hard on smaller organizations.

Article 57 requires member states to establish AI regulatory sandboxes, and Article 62 gives SMEs priority access to them. Sandboxes are controlled environments where providers can develop and test AI systems in real or near-real conditions under regulatory supervision, without having completed the full conformity assessment. The sandbox does not exempt organizations from compliance obligations, but it provides a way to get regulatory guidance on difficult interpretive questions during development rather than discovering compliance gaps after launch.

Article 11 allows SMEs to provide the technical documentation required by Annex IV in a simplified form, reducing the administrative burden without eliminating the substantive requirement. Conformity assessment fees charged by notified bodies must be adjusted to take account of the specific interests and needs of SMEs, which in practice means lower fees for smaller organizations.

National authorities are required to organize awareness-raising and training activities specifically tailored to the needs of SMEs, and must provide guidance and advice to SMEs seeking to navigate compliance requirements. When sanctions are determined for violations, member states must take account of SME interests to ensure proportionality.

The open source exception: narrower than it seems

Providers that release AI systems under free and open-source licenses are exempt from most obligations under the Act, as long as the system is not placed on the market as a high-risk system, does not fall under the Article 5 prohibitions, and does not fall into the transparency obligation categories for limited-risk systems. This exception is significant for startups whose business model involves open-source software with commercial services built around it.

The exception has important limitations. It does not apply to general-purpose AI models with systemic risk, the category defined in Article 51 that includes large foundation models trained with compute exceeding 10^25 FLOPs. Providers of such models, even open-source ones, face documentation, testing and incident reporting requirements. And the exception does not protect downstream deployers who use an open-source model in a high-risk context. If a startup builds a hiring tool on top of an open-source language model and deploys it for consequential employment decisions, that startup is a provider of a high-risk system and must comply accordingly, regardless of the original model's license.
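For orientation on where that threshold sits, training compute is commonly approximated as 6 × parameters × training tokens; that heuristic is an industry rule of thumb, not part of the Act, so the sketch below is only a back-of-the-envelope screen.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25   # FLOPs, the Article 51 presumption

def approx_training_flops(params: float, tokens: float) -> float:
    # Common rule of thumb: ~6 FLOPs per parameter per training token.
    return 6 * params * tokens

# Illustrative figures: a 70B-parameter model trained on 15T tokens.
flops = approx_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs; systemic-risk presumption: {flops > SYSTEMIC_RISK_THRESHOLD}")
# 6.30e+24 FLOPs; systemic-risk presumption: False
```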

The real competitive dynamics

The concern that the AI Act will disadvantage European AI companies relative to US, Chinese or other competitors is partially valid and partially overstated. For the minimal-risk and limited-risk categories that cover most consumer AI products, the compliance burden is genuinely modest, essentially requiring disclosure mechanisms that good product design would implement anyway.

For high-risk AI, the competitive dynamics are more complex. Enterprise buyers, particularly in finance, healthcare, and the public sector, increasingly require vendors to demonstrate regulatory compliance as a condition of purchase. A startup that has completed a thorough conformity assessment and can produce comprehensive technical documentation is in a stronger position with these buyers than a competitor that has not, regardless of where that competitor is headquartered. Many large organizations prefer to work with AI vendors whose products have been independently validated against a public regulatory standard rather than relying solely on vendor claims.

The more genuine competitive concern is the timeline disadvantage. Compliance takes time that a startup's international competitors do not have to spend. A high-risk AI product that could be launched in six months without compliance work may take twelve months with it, and in fast-moving markets that delay can determine whether a startup captures market position or loses it to faster-moving alternatives.

Practical steps for startups navigating the AI Act

The first priority for any AI startup is a clear classification of their system under the Act's risk tiers. This is not always obvious, and getting it wrong in either direction has costs: underestimating risk exposure creates compliance liability, while overestimating it leads to unnecessary compliance expenditure that could have funded development instead. The risk assessment tool can help structure this analysis.

For high-risk systems, the most valuable thing a startup can do early is build compliance requirements into the development process rather than retrofitting them afterward. This means maintaining data governance documentation from the start of model training, implementing logging architecture before deployment rather than after, and designing human oversight mechanisms into the product interface rather than adding them as an afterthought. Compliance-by-design is significantly cheaper than compliance-by-retrofit.
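As one concrete illustration of logging-before-deployment, the sketch below appends an audit record for every consequential model decision to a JSON-lines file. The field set is an assumption, loosely inspired by the record-keeping idea in Article 12, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def log_decision(log_path, model_version, input_summary, output, operator_id):
    """Append one auditable record per consequential model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the output
        "input": input_summary,           # summarize; avoid raw personal data
        "output": output,
        "operator": operator_id,          # the human in the oversight loop
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    "decisions.jsonl", "screener-v1.3",
    input_summary={"cv_id": "c-1042", "role": "analyst"},
    output={"recommendation": "interview", "score": 0.81},
    operator_id="hr-reviewer-7",
)
```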

For startups seeking regulatory certainty on specific interpretive questions, the AI regulatory sandbox is the right mechanism. Member states are required to have sandboxes operational by August 2026, and several have already established preliminary programs. The sandbox provides a pathway to get regulator feedback on whether a specific design choice satisfies a specific regulatory requirement, which is far more valuable than a legal opinion that cannot bind the regulator.

For fundraising and partnerships, AI Act compliance posture is increasingly a due diligence question. Investors who have seen portfolio companies caught out by GDPR non-compliance understand the litigation and regulatory risk of insufficient compliance documentation. Demonstrating a coherent compliance program, even one that is not yet complete, is part of the responsible governance that investors in regulated technology companies increasingly expect.

The timeline pressure is real. The core obligations for high-risk systems apply from August 2, 2026. For startups building high-risk AI, that is not a distant deadline: the time to develop a risk management system, build compliant data governance, produce technical documentation, and complete a conformity assessment is measured in months, and starting that process late creates pressure that makes quality compromises more likely. Starting the compliance work in parallel with product development, not after it, is the approach that gives startups the best chance of reaching the market on schedule with regulatory exposure managed.

The value chain as leverage for small developers

An often-overlooked aspect of the AI Act is that obligations are distributed across the value chain rather than concentrated solely on the provider. Several obligations sit with deployers by law: completing the fundamental rights impact assessment (FRIA) under Article 27, assigning human oversight in operation under Article 26 (building on the oversight capabilities the provider must design in under Article 14), and notifying the provider of serious incidents that feed into Article 73 reporting. A startup that makes this allocation explicit in its contracts significantly lightens its own post-market operational burden.

The EU Model Contractual Clauses (MCC-AI) provide a starting point. There are two versions: a comprehensive one for high-risk systems and a lighter one for lower-risk applications. A startup that uses these clauses as the basis for its supplier contracts demonstrates serious engagement with value-chain obligations without needing to reinvent contract language for each deal.

The second leverage point is regulatory sandboxes. Article 57 requires member states to have operational sandboxes by August 2026. These provide controlled environments where startups can test systems under regulatory supervision before full deployment, getting explicit feedback on compliance interpretations rather than discovering issues post-launch during enforcement actions. This de-risks the timeline considerably.

Fundraising implications: regulatory posture as a due diligence factor

AI investors learned hard lessons from GDPR non-compliance scandals in portfolio companies, and the AI Act adds a new layer to due diligence. Article 99 sets maximum fines at €15 million or 3% of global annual turnover, whichever is higher, for violations of the high-risk obligations, and €35 million or 7% for violations of the Article 5 prohibitions. For SMEs and startups, each fine is capped at the lower of the percentage and the absolute amount, but even that reduced exposure is meaningful for companies with limited revenue.
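The interaction between the percentage caps, the absolute caps, and the SME rule is easy to get wrong, so a worked example helps. The amounts below come from Article 99; the function itself is an illustration, not legal advice.

```python
def fine_cap(turnover_eur: float, prohibited_practice: bool, is_sme: bool) -> float:
    absolute, pct = (35e6, 0.07) if prohibited_practice else (15e6, 0.03)
    # Default rule: whichever is higher; for SMEs and startups: whichever is lower.
    pick = min if is_sme else max
    return pick(absolute, pct * turnover_eur)

# A startup with €2M turnover violating a high-risk obligation:
print(f"€{fine_cap(2e6, prohibited_practice=False, is_sme=True):,.0f}")   # €60,000
# A large provider with €10B turnover deploying a prohibited practice:
print(f"€{fine_cap(10e9, prohibited_practice=True, is_sme=False):,.0f}")  # €700,000,000
```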

A coherent compliance program signals to investors that management understands regulatory risk and is actively managing it. For B2B startups selling to enterprise customers, particularly in public sector and financial services, compliance posture is increasingly a pre-contract due diligence requirement. Large buyers want vendors that can produce evidence of compliance: technical documentation, CE marking for high-risk systems, contractual provisions for audits.

Investors that conduct thorough due diligence on regulatory risk are protecting themselves. A startup that reaches Series A with a solid compliance foundation carries dramatically lower post-investment risk than one facing a compliance scramble in the six months before the August 2026 deadline.

Sector-specific examples: navigating high-risk domains

A startup building AI for employment screening and hiring recommendations falls under Annex III, Category 4 (employment and worker management). Article 10 imposes data governance requirements: training data must be representative of the full relevant employment population, including historically underrepresented groups. A model trained on historical hiring decisions where women or migrants were systematically excluded will replicate that discrimination unless actively corrected. The startup must demonstrate this via Article 11 technical documentation.

A fintech startup building AI-assisted credit scoring falls under Annex III, Category 5 (essential private services). Before market introduction, the startup must complete conformity assessment, produce technical documentation per Article 11, implement the Article 9 risk management system, and ensure deployers can implement Article 26 obligations. Each deployer integrating the system into their platform takes deployer obligations for their specific context.

A healthtech startup building predictive health monitoring via wearables may fall under Annex III (if it diagnoses or recommends treatment) or under the Medical Device Regulation 2017/745. The overlap between the AI Act and sectoral regulation is critical: for medical AI, obligations stack rather than replace one another, so compliance means satisfying both MDR and AI Act requirements.

Practical roadmap for startups

The step that saves the most time is classifying your system first. Use the risk assessment tool to determine which risk tier applies; that immediately clarifies which obligations are relevant.

For high-risk systems, begin technical documentation now. Building documentation per Annex IV takes months and is far harder to reconstruct post-launch than to maintain in parallel during development.

When your member state's regulatory sandbox becomes operational (required by August 2026), consider participating. Direct dialogue with regulators on interpretation questions before the formal enforcement phase begins substantially reduces post-launch regulatory risk.

Finally, contractual clarity on deployer-provider responsibilities is essential. Use MCC-AI as a starting point, but customize for your business model. The goal is to define clearly: which party holds which obligations, what information flows in which direction, how incidents are reported, and who bears liability for what.
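A lightweight way to keep that allocation explicit before it becomes contract language is a simple responsibility matrix, sketched below. The assignments are illustrative, and some follow from the Act itself rather than from negotiation (the FRIA, for instance, sits with certain deployers under Article 27).

```python
# Which party owns which obligation; a drafting aid, not a legal determination.
RESPONSIBILITY_MATRIX = {
    "risk management system (Art. 9)":                 "provider",
    "technical documentation (Art. 11)":               "provider",
    "instructions for use (Art. 13)":                  "provider",
    "oversight capability in the design (Art. 14)":    "provider",
    "human oversight in operation (Art. 26)":          "deployer",
    "fundamental rights impact assessment (Art. 27)":  "deployer",
    "serious incident reporting (Art. 73)":            "provider, on notice from deployer",
}

for obligation, owner in RESPONSIBILITY_MATRIX.items():
    print(f"{obligation:50s} -> {owner}")
```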
