2025 Year in Review and Outlook on 2026, the Crucial Implementation Year
In 2025, the EU AI Act entered its implementation phase. Following formal approval in mid-2024, the past year was marked by provisions entering into force, political debate, national preparations, and reactions from the technology industry. In this blog, we provide an overview of the developments of 2025 and look ahead to what can be expected in 2026: the phased introduction of the law, decisive political milestones, implementation and reactions in member states (including the Netherlands), business positions, concerns about implementation, and upcoming deadlines and guidelines.
Phased Implementation: Which Rules Applied in 2025?
The AI Act entered into force on August 1, 2024, but its obligations take effect step by step to give stakeholders time to adapt. In February 2025, the first provisions became applicable, notably the ban on AI systems posing unacceptable risk. From February 2, 2025, practices such as social scoring by governments, or other AI applications that seriously violate fundamental rights, are explicitly prohibited. This immediate ban illustrates the risk-based approach of the law: applications with unacceptable risks are simply not tolerated.
As of August 2, 2025, new requirements came into force for general-purpose AI models, the so-called General Purpose AI (GPAI) or foundation models. Providers of such broad AI models (for example, large language models or generative AI systems) have since had to comply with stricter requirements. Specifically, these involve transparency obligations and technical safeguards: providers must prepare extensive technical documentation, take measures to ensure their models do not infringe copyright, and publish summaries of the training data used. They must also test their AI models before launch for bias, toxic content, and robustness.
For the most advanced models with potential "systemic risk," additional obligations apply, such as conducting risk evaluations, adversarial testing, reporting serious incidents to the European Commission, and providing information about the model's energy consumption. These obligations formally took effect in August 2025, although the law provides that enforcement and sanctions for these provisions only start from August 2026. This created a kind of grace period: model developers must already comply with the rules, but supervisors may show leniency until 2026.
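To make these obligations concrete, here is a minimal sketch of the kind of internal compliance record a GPAI provider might keep. All field names are hypothetical shorthand of our own, not an official template; the comments map the fields to the obligations described above.

```python
from dataclasses import dataclass, field

@dataclass
class GpaiComplianceRecord:
    """Illustrative record of the artifacts a GPAI provider must be able to show."""
    model_name: str
    technical_documentation: str          # extensive technical documentation
    training_data_summary: str            # public summary of training data used
    copyright_policy: str                 # measures against copyright infringement
    pre_launch_tests: dict[str, bool] = field(default_factory=dict)
    # Extra items for models with potential systemic risk:
    risk_evaluation_done: bool = False    # model evaluations / risk assessments
    adversarial_testing_done: bool = False
    incident_reporting_channel: str = ""  # route for reporting serious incidents
    energy_consumption_known: bool = False

record = GpaiComplianceRecord(
    model_name="example-gpai-model",
    technical_documentation="docs/technical_file.pdf",
    training_data_summary="Public summary of data sources and filtering applied.",
    copyright_policy="Respects machine-readable opt-outs.",
    pre_launch_tests={"bias": True, "toxic_content": True, "robustness": True},
)
print(record.pre_launch_tests)
```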
Interim conclusion: in 2025, two important components took effect: (1) the ban on certain AI applications (such as social credit systems) to protect citizens, and (2) the first duties of care for developers of general-purpose AI models to ensure transparency and safety. All this follows the phased schedule agreed at approval: the heavier and more complex the obligations, the more lead time stakeholders receive before they apply, with full application of the AI Act planned for 2027.
Political Milestones and Negotiations in 2025
Formal trilogue negotiations had already produced an agreement in late 2023 (on December 9, 2023, the European Parliament and Council reached a compromise on the final text), and the law was approved by Parliament in March 2024 and by the Council in May 2024. Yet 2025 was anything but quiet on the political front: the year saw intense discussions about the implementation, and possible adjustment, of the AI Act.
Resistance and Calls for Pause
From spring 2025, criticism emerged from the business community and some politicians that implementation was moving too fast and was too complex. In June 2025, the European tech lobby (CCIA Europe, whose members include Google, Meta, and Apple) called for a pause in the implementation of the AI Act, warning that a hasty rollout could harm Europe's AI ambitions.
Shortly thereafter, in early July 2025, a group of 45 major European companies, including names from various sectors such as Airbus, ASML, Lufthansa, Mercedes-Benz, and Siemens, published an open letter to the European Commission requesting that it "stop the clock" for two years for the heaviest obligations. They expressed concerns about a lack of clarity and high compliance costs, and pointed to the absence of important implementation guidance at that time: the AI Code of Practice that should have been ready by May 2, 2025 had not yet been published. The companies requested a two-year delay for both the rules on high-risk AI systems (planned for 2026) and the new rules for general-purpose AI models (from 2025), so that the necessary guidelines and standards could be completed first.
Some political leaders supported this call. Swedish Prime Minister Ulf Kristersson called the EU AI rules "confusing" and likewise advocated a pause in June 2025. This criticism came at a sensitive moment: the EU wants to lead on AI regulation but must also consider its competitive position and cooperation with partners such as the US. In the second half of 2025, pressure from the United States mounted: the new American administration (under President Trump, from 2025) pushed the EU to reconsider "overly strict" parts of the law, with threats of trade tensions.
European Commission Response
The European Commission initially held to the planned timeline. In mid-2025, a Commission spokesperson stated that there would be no general pause and that deadlines (such as August 2, 2025) remained unchanged. Commissioner Henna Virkkunen (digital portfolio) emphasized in the European Parliament that she wants to implement the AI Act "in an innovation-friendly manner" but would not consider a temporary halt.
However, the Commission acknowledged that flexibility was possible: a targeted slowing of pace would be considered "if crucial standards or guidelines are not ready on time". Indeed, the Commission proved willing to postpone publication of implementation guidance somewhat. The aforementioned Code of Practice for GPAI models, intended as a guide for AI developers, was delayed by a few months and eventually appeared on July 10, 2025 instead of May. The Commission also announced that the European AI Board (the new EU-wide cooperation body of supervisors) would decide how quickly this code would be rolled out; implementation by the end of 2025 was considered, half a year later than originally planned.
Adjustments and "Simplification" Proposal
Toward the end of 2025, voices within the Commission called for targeted adjustments to the AI Act, partly in the context of broader digital agreements with the US. In November 2025, the Financial Times reported that the Commission was preparing a "simplification procedure" to ease or postpone parts of the AI regulation.
Under these plans, enforcement against violations involving high-risk AI systems would, for example, become less strict and less immediate: companies that violate the rules would first get one year to implement improvements before sanctions follow. It was also mentioned that fines for violating transparency obligations would only be imposed from 2027, instead of directly in 2026. Also notable was the idea of changing the supervisory structure: one central European supervisor would take over part of enforcement. This would be a break with the current model, in which each member state designates its own independent AI supervisor.
Political Balancing Act
In short, 2025 was anything but quiet politically: while the AI Act was formally already in implementation, a debate raged about the pace and weight of that implementation. The European Commission balanced between holding on to the "first in the world" AI rules and addressing concerns that the EU would disadvantage itself compared to other regions. Concrete adjustments were not yet fixed at the end of 2025; it was clear by then that further negotiations in 2026 would determine if, and how, the AI Act would be adjusted in its details. The Commission did reassure stakeholders that it remains fully behind the goals of the AI Act, even if there are pragmatic delays or simplifications.
Reactions from Member States and National Implementation (Focus on the Netherlands)
The AI Act is an EU regulation and applies directly in all member states, but countries must make practical preparations: designate national supervisors, set up enforcement mechanisms, and, where needed, adopt additional regulations for matters such as sanctions. Under the law, all member states had to designate and announce their competent authorities for the AI Act by August 2025 at the latest, and establish rules for national fines and penalties and report them to Brussels. This led to discussions in many countries about who would become that supervisor and how tasks would be divided.
Dutch Preparation and Supervision
In the Netherlands, it became clear early on that the Dutch Data Protection Authority (AP) will play a central role in AI supervision. The AI Act requires supervision of both compliance with technical requirements and protection of fundamental rights. Calls for sharper supervision of algorithms had already grown in the Netherlands after the childcare benefits scandal. The AP, together with the National Inspectorate for Digital Infrastructure (RDI), the supervisor for digital product safety, urged the government in 2024 to divide AI supervision tasks clearly.
Although the decision was formally still pending in 2025, it was obvious that the AP would become the primary AI supervisor. The AP prepared for this and received extra budget in 2025 to build capacity. (The privacy watchdog warned, however, that this "extra" budget was actually insufficient given the scope of the new supervisory tasks.) A possible EU plan to introduce a central European supervisory body was viewed with suspicion in the Netherlands, as this could marginalize the AP's role.
At the same time, the Netherlands took steps to make government use of AI more transparent. The Dutch Data Protection Authority advocated in July 2025 for mandatory algorithm registration for all government agencies. According to the AP, Dutch government bodies are lagging in tracking and reporting their AI systems. The supervisor argued that such algorithm registers are needed alongside the European database being set up under the AI Act for high-risk AI systems. This plea drew further attention to the national Algorithm Register, a platform where organizations can share information about their algorithms.
The Netherlands has played a pioneering role in setting up an Algorithm Register in the government context. In August 2025, immediately after the GPAI rules came into force, the Ministry of Digital Affairs announced new tools to help organizations comply with the AI Act. Practical tools were made available through the Algorithm Register, including a guide to determine whether a technology falls under the AI definition, a factsheet for public-sector executives about the AI Act, and material to promote AI literacy. The register was also technically improved (e.g., with exportable checklists of requirements) to make compliance easier. These steps show that the Netherlands is committed to knowledge sharing and practical support in implementing the AI Act, in addition to formally designating supervisors.
The general attitude of the Netherlands toward the AI Act can be described as positive but critical. At the entry into force in August 2024, Minister Micky Adriaansens (Economic Affairs) and State Secretary Alexandra van Huffelen (Digitalization) expressed their support for the EU rules, emphasizing that they strike the right balance between the opportunities and risks of AI. The Netherlands embraces the economic potential of AI but also wants to ensure that AI systems are reliable and verifiable. This line continued in 2025: engaging constructively on feasibility (hence the openness to possible simplification), while also investing in a strong national enforcement structure.
Other Member States
Other member states have gone through similar processes. Many countries attach AI supervision to existing authorities (e.g., data protection authorities or market supervisors) and form interdisciplinary teams. In Germany, for example, an AI supervisor within the Bundesnetzagentur is under discussion, and in France, the CNIL (data protection authority) will play an important role. Member states exchange information through the new European AI Board to promote consistent application. The European Commission has also set up the European AI Office, operational since 2024/2025, which must support national authorities and coordinate joint investigations; the European AI Board, in turn, functions comparably to the European Data Protection Board under the GDPR.
Business and Tech Sector Reactions in 2025
The technology sector made itself clearly heard about the AI Act in 2025. As described above, large European and international companies pushed for slower or adjusted implementation, fearing loss of innovation capacity and competitiveness. These concerns arose from the complexity of the regulations and uncertainty about practical implementation.
An Amazon Web Services survey among European companies indicated that more than two-thirds of companies struggle to understand their responsibilities under the AI Act. There was uncertainty about questions such as: Does my AI tool fall under "high risk"? Am I a provider or merely a user? How do I prove compliance? Many companies, particularly startups and SMEs, fear a heavy administrative burden before they can bring AI applications to market.
Constructive Collaboration Through Codes of Conduct
At the same time, the business community also showed a constructive side. When it became clear that the basic rules would proceed, various major AI providers cooperated with the EU on self-regulation through a code of practice. In July 2025, the Commission presented the General Purpose AI Code of Practice, a voluntary code that helps developers comply with AI Act obligations regarding transparency, safety, and copyright.
This code was developed in a multi-stakeholder process and aligned with the AI Act. Major players such as Google, Microsoft, IBM, OpenAI, Anthropic, Amazon, and European AI companies (e.g., Mistral AI, Aleph Alpha) immediately joined as signatories. By following the code, they can demonstrate compliance, which lightens the burden of proof and provides more legal certainty. This initiative shows that the sector is willing to take steps toward responsible AI even before all legal obligations are enforceable.
Startups and Open Source
In addition to the major players, the European startup scene also made itself heard. Some AI startups fear that heavy obligations will disproportionately affect them and called for tailored rules or exemptions. The AI Act does contain relaxations for research activities and open-source components (which are largely exempt from compliance requirements), but for commercial startups it remains a challenge. Industry organizations emphasized that the EU should not stifle the innovation climate and urged clear standards and sandboxes for experimenting without immediate enforcement risk.
Sector-Specific Preparation
In sensitive sectors, such as HR, finance, and healthcare, companies prepared intensively in 2025 for the upcoming high-risk obligations. Large employers have examined their HR algorithms, knowing that recruitment and assessment systems will soon be classified as high risk. A start has been made with impact assessments and internal AI governance structures, driven by fines that can reach up to 7% of global annual turnover (or EUR 35 million, whichever is higher) for the most serious violations.
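To give a sense of scale, here is a back-of-the-envelope sketch of those fine ceilings. It encodes the headline figures from the Act's penalty regime (EUR 35 million or 7% of worldwide annual turnover for prohibited practices, EUR 15 million or 3% for most other violations, whichever is higher); the function name and structure are our own illustration.

```python
def max_fine_eur(worldwide_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine: the fixed amount or the turnover share, whichever is higher."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * worldwide_turnover_eur)  # EUR 35M or 7%
    return max(15_000_000, 0.03 * worldwide_turnover_eur)      # EUR 15M or 3%

# A firm with EUR 2 billion turnover risks up to EUR 140 million for a prohibited practice:
print(f"{max_fine_eur(2_000_000_000, prohibited_practice=True):,.0f}")  # 140,000,000
```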
In summary, the tech sector responded in 2025 along two tracks: critical where necessary, with lobby letters and public warnings about a lack of clarity, but also proactive and constructive through voluntary codes and compliance preparation. This dual attitude has influenced the political discussion (after all, room was created for phased enforcement) and will remain important in 2026.
Implementation and Interpretation Challenges
A central theme in 2025 was the interpretation and practical implementation of the AI Act. The regulation is very extensive and new, leading to interpretation questions. Some prominent points of attention and concerns:
Definition of AI and Scope
What exactly counts as an "AI system" under the law? The definition is deliberately technology-neutral and broadly formulated, which leaves developers in doubt about whether a particular software algorithm falls under AI Act obligations. To help with this, the Netherlands developed a decision guide (see the Algorithm Register tools). The European Commission also published guidelines during 2025 to clarify key concepts. On July 18, 2025, guidelines appeared on the scope of obligations for GPAI providers, explaining in plain language which models and use cases fall under the new rules.
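In the spirit of such decision guides, here is a deliberately simplified triage helper. The real legal test under the Act's definition requires case-by-case analysis; this sketch encodes only the headline criteria (machine-based, some autonomy, inference from input, outputs that influence an environment), and the parameter names are our own shorthand.

```python
def likely_ai_system(machine_based: bool,
                     operates_with_some_autonomy: bool,
                     infers_how_to_generate_outputs: bool,
                     outputs_can_influence_environment: bool) -> bool:
    """First-pass triage against the headline criteria of the AI Act's definition."""
    return all([machine_based, operates_with_some_autonomy,
                infers_how_to_generate_outputs, outputs_can_influence_environment])

# A static, hand-written rule table typically fails the 'inference' criterion:
print(likely_ai_system(True, True, False, True))  # False
```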
Overlap with Existing Legislation
Companies and lawyers struggled with how the AI Act relates to existing rules such as the GDPR (privacy) or product safety directives. For example, there is discussion about whether certain AI decisions fall under both the AI Act and the GDPR's profiling restrictions. The European Commission has indicated that it is mindful of consistency across the digital rulebook and is working on a "simplification" that may remove overlaps. Initiatives have also been started to connect AI Act obligations with standardization: technical standards are being developed (via CEN/CENELEC and ISO) so that manufacturers can demonstrate their AI is safe according to the state of the art. In 2025, however, these standards were still in development, creating uncertainty for manufacturers.
Supervisory Capacity
Both at EU level and in the member states, there is concern about whether supervisors have sufficient expertise and manpower to enforce the AI Act. National authorities such as the AP are expanding their AI teams, but acknowledge that supervising AI systems is complex (given the technical subject matter and the contextual understanding required). The AI Act provides for a European AI Board to ensure consistency, but how effective this will be remains to be seen. At the end of 2025, some watchdogs were already warning that without sufficient resources, the law, however impressive on paper, could prove toothless in practice.
International Context and "Brussels Effect"
The AI Act is much stricter than, for example, the current approach in the US (voluntary AI principles) or the flexible guidelines common in Asia. A recurring concern was whether the EU is getting too far ahead and putting European companies at a disadvantage. At the same time, European policymakers hope for a "Brussels Effect" in which others adopt the EU's rules. In 2025, however, it became apparent that major economies are going their own way: the US moved toward fewer rules under Trump, the UK followed a pro-innovation path, and only a few countries, such as Canada, Brazil, and Peru, showed interest in similar AI legislation. This limited international following raised questions about feasibility: if AI systems are developed worldwide, how does the EU prevent its rules from being circumvented via other jurisdictions? This discussion continues and may lead in 2026 to more intensive diplomatic consultation or cooperation in forums such as the G7 and OECD to reach more common AI principles.
Government Use of AI
In addition to companies, government organizations are also subject to the AI Act (e.g., when using AI in policing, justice, or social services). Civil rights organizations have raised concerns about how governments will interpret the law. For example: the Act prohibits real-time biometric identification in public spaces, but with exceptions for law enforcement; where exactly is the boundary? In the Netherlands, the AP warned in 2025 about the rise of AI systems that claim to recognize emotions and called for caution in deploying them. This shows that implementation is not only a technical matter but also requires ethical and legal interpretation. The European Commission and the AI Board are expected to publish additional guidance on this, and supervisors will need to exchange best practices.
Lessons Learned
In short, 2025 was a year of learning and interpreting: policymakers, businesses, and supervisors alike tried to get a grip on the new AI rules. Significant progress has been made in concretizing obligations (through codes, guidelines, and so on), but points of attention remain, such as sufficient clarity and capacity. These lessons from 2025 form the prelude to a crucial 2026, in which theory must truly be put into practice.
Outlook for 2026: Deadlines and Next Steps
The year 2026 will be decisive for the actual application of the AI Act. A number of major milestones are on the agenda:
February 2026: Further Guidelines
By February 2, 2026, the European Commission must provide additional guidance, including guidelines on the classification of high-risk AI systems (Article 6 of the AI Act) and a template for post-market monitoring plans (tracking AI performance after market introduction). This is important to provide certainty before the high-risk obligations come into force.
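As a thought experiment on what post-market monitoring boils down to, here is a minimal sketch: periodically compare live performance against the performance documented at conformity assessment and escalate when it degrades. The tolerance threshold and function names are hypothetical choices of our own, not prescribed by the Act or the forthcoming template.

```python
def monitor(live_accuracy: float, documented_accuracy: float,
            tolerance: float = 0.05) -> str:
    """Flag performance degradation relative to the documented baseline."""
    if live_accuracy >= documented_accuracy - tolerance:
        return "OK: performance within documented range"
    return "ALERT: degradation beyond tolerance; investigate and log"

# Live accuracy of 0.88 against a documented 0.91 stays within the 5-point tolerance:
print(monitor(live_accuracy=0.88, documented_accuracy=0.91))
```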
August 2026: High-Risk AI Obligations in Force
August 2, 2026 is the date from which the bulk of AI Act obligations apply. From that moment, all high-risk AI systems newly placed on the market must comply with extensive requirements: think of conformity assessments, risk management systems, transparency toward users, and human oversight measures. The transparency obligations for limited-risk AI systems (such as labeling deepfakes or disclosing that one is interacting with a chatbot) will also apply. Companies using AI in, for example, recruitment and selection, credit provision, education, or public services must be fully compliant by this date.
In practice, this means that 2026 will be dominated by audits and implementation projects within organizations to get everything in order before the deadline.
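Such a project often starts from a simple gap analysis across the obligation areas named above. A minimal sketch follows; the labels are our own shorthand for the obligation areas, not legal text.

```python
HIGH_RISK_OBLIGATIONS = {
    "risk_management_system": False,        # continuous risk management process
    "data_governance": False,               # quality of training/validation/test data
    "technical_documentation": False,       # up-to-date technical file
    "event_logging": False,                 # automatic logging of operation
    "transparency_to_users": False,         # clear instructions for use
    "human_oversight": False,               # effective oversight measures
    "accuracy_robustness_security": False,  # tested and documented
    "conformity_assessment": False,         # completed before market placement
}

def open_items(status: dict[str, bool]) -> list[str]:
    """List the obligation areas that are not yet in order before the deadline."""
    return [area for area, done in status.items() if not done]

print(open_items(HIGH_RISK_OBLIGATIONS))
```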
August 2026: Enforcement and Existing Systems
From August 2026, supervisors will also formally have the authority to enforce the GPAI rules that already applied in 2025. In addition, a transitional arrangement kicks in: existing AI systems that were already in use before the Act entered into force also fall under the law if they are significantly modified after August 2, 2026. This prevents old AI systems from being kept running indefinitely to evade the rules. Providers and users of such legacy AI would do well to use 2026 to implement updates that make these systems compliant, or to plan for replacement.
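Read mechanically, the transitional rule as summarized here can be sketched as follows. This is a simplification: what counts as a "significant modification" is itself a legal question, and the logic below encodes only this blog's summary, not the statutory text.

```python
from datetime import date

CUTOFF = date(2026, 8, 2)

def legacy_system_in_scope(placed_on_market: date,
                           last_significant_modification: date | None) -> bool:
    """Rough encoding of the transitional rule as summarized above; not legal advice."""
    if placed_on_market >= CUTOFF:
        return True  # newly placed systems are covered outright
    # Legacy systems come into scope once significantly modified after the cutoff:
    return (last_significant_modification is not None
            and last_significant_modification >= CUTOFF)

# A system from 2023 that is significantly modified in 2027 falls under the law:
print(legacy_system_in_scope(date(2023, 5, 1), date(2027, 1, 15)))  # True
```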
National AI Sandboxes Operational
Member states must have at least one AI regulatory sandbox set up by August 2, 2026. These sandboxes provide a controlled environment in which AI developers can test innovative systems in consultation with supervisors, without immediately having to comply with all the strict rules. In 2025, many countries already started preparations; in the Netherlands, for example, the Dutch AI Coalition is involved in exploring sandbox models. In 2026, these sandboxes will become operational, which offers an important opportunity to test the practical workability of the law and adjust where necessary.
Possible Adjustment of Timelines
As discussed, at the end of 2025 the Commission was considering a targeted delay for some components (e.g., the high-risk obligations). Should this plan gain political support in 2026, certain components could come into force only in August 2027 instead of 2026. The focus will be particularly on the first half of 2026: under the incoming EU presidency, it will be decided whether the "simplification" proposals are adopted. On November 19, 2025, a decision was on the Commission's agenda; the outcome will be worked out further in 2026.
For now, however, organizations should assume that August 2026 remains the moment when the AI Act gets its full teeth. The European Commission has made clear that it stands behind the law and wants to roll it out "according to the legal timeline". Only targeted delays (e.g., for sanctions or specific rules) are possible, and those will only be definitively decided in 2026.
More Guidance and Standardization
During 2026, we expect additional implementing acts, standards, and Q&As from the EU: think of template forms for risk assessment, or harmonized standards for specific technical requirements (for example, dataset documentation or accuracy testing). At the end of 2025, consultations started on topics such as protocols for copyright "opt-outs" in training data and procedures for reporting serious incidents from 2026. The results will become visible in 2026 in more concrete guidance that companies can follow.
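As an example of what such procedures may standardize, here is a hedged sketch of a structured incident record. All field names are hypothetical; the official reporting templates were still under consultation at the end of 2025.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SeriousIncidentReport:
    """Hypothetical structure; the official reporting template is still being consulted on."""
    provider: str
    model_or_system: str
    occurred_at: datetime
    description: str                # what happened and who was affected
    suspected_cause: str            # e.g. data drift, misuse, component failure
    corrective_measures: str        # mitigations taken or planned
    notified_authority: str         # e.g. a national supervisor or the AI Office

report = SeriousIncidentReport(
    provider="Example AI B.V.",
    model_or_system="example-recruitment-screener",
    occurred_at=datetime(2026, 9, 1, 14, 30),
    description="Systematic mis-ranking of a protected group detected in production.",
    suspected_cause="Data drift after a source-system migration.",
    corrective_measures="Model rolled back; retraining with corrected data planned.",
    notified_authority="National market surveillance authority",
)
```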
First Tests and Enforcement Cases
By the end of 2026, the first enforcement actions may well take place. Supervisors will probably still act mainly in a supportive role in 2026 (information, warnings), but after August 2026, fines can be issued for flagrant non-compliance. This is something the major technology companies in particular are taking into account: just as with the GDPR, the EU could pick a few high-profile cases to set a precedent. Conceivable, for example, is an inspection of AI systems in the HR sector, or an investigation into generative AI services that are insufficiently transparent. At the same time, 2026 remains a year of cooperation: the AI Act provides for peer reviews and consultation between national supervisors, so maximum penalties will not be imposed without alignment being sought first.
Conclusion
In 2026, the EU AI Act reaches its full effect, unless a last-minute delay is decided. Organizations heard the warning shots in 2025 and must use 2026 as a "compliance sprint." The legislative groundwork of 2025, such as the codes of conduct and guidelines, provides a framework, but the real test comes when the rules are applied at scale. Key dates like August 2026 serve as a benchmark: they will show to what extent Europe's ambitious AI regime is workable in practice.
Moreover, 2026 will make clear whether the EU remains alone in this or whether international lines converge. One thing is certain: developments around the AI Act will continue unabated in 2026, with potentially further fine-tuning of the rules and their interpretation. We continue to follow this space closely and will keep you informed of the latest insights and obligations regarding AI regulation in Europe.
Deepen Your Knowledge: Check out the Complete EU AI Act Guide for a full overview of all aspects of AI legislation.