
Parliament Committees Vote to Postpone High-Risk AI Rules: What This Means for Your Organisation

7 min read

Vote result 19 March 2026: The IMCO and LIBE committees of the European Parliament jointly adopted their position on the AI Act Omnibus simplification: 101 in favour, 9 against, 8 abstentions. The plenary vote in the full Parliament is scheduled for 26 March 2026. After that, negotiations with the Council begin.

For many compliance teams, one date has been fixed on the calendar for months: 2 August 2026. That was when the rules for high-risk AI systems listed in Annex III of the AI Act were set to apply. On 19 March 2026, the IMCO and LIBE committees of the European Parliament voted in favour of a position that shifts that date considerably further into the future. Understandably, organisations are now asking whether they can press pause.

The short answer is no. But the nuance is genuinely relevant to how you prioritise your work over the next eighteen months.


What was actually voted on

The committees adopted a position that introduces two separate postponements for high-risk AI, each with its own new deadline.

The first category covers systems listed in Annex III of the AI Act: biometric identification, critical infrastructure, education, employment, essential services, law enforcement, the justice system, and border management. Obligations for these systems shift from 2 August 2026 to 2 December 2027, an extension of sixteen months.

The second category applies to AI systems that are already regulated by existing EU sectoral safety legislation, such as medical devices, radio equipment, and toys. These get even more time: until 2 August 2028. The reasoning is that companies already subject to demanding sectoral regimes should not have to simultaneously implement all AI Act obligations in full. The AI Act obligations may also be less stringent for products already comprehensively governed by sectoral law.

Co-rapporteur Arba Kokalari (EPP, Sweden) put it plainly: companies need clarity now on whether their systems are high-risk or not. That is indeed the core of the problem. The delay is partly a consequence of the Commission publishing its guidance on high-risk classification late, leaving organisations to work for months with incomplete information.


Which rules already apply

It is essential to understand what this postponement does not touch. Two categories of obligation are already in force and are not delayed by the Omnibus vote.

Article 4 of the AI Act, the AI literacy obligation, has applied since 2 February 2025. Organisations that provide or deploy AI systems are already required to ensure that their staff have sufficient knowledge, skills, and understanding of AI to use systems responsibly. This is not a paper obligation: supervisory authorities can already enforce it.

Article 5, the list of prohibited practices, is also already in force. Systems that pose unacceptable risks are banned: manipulative techniques that exploit vulnerabilities of individuals, social scoring by public authorities, real-time remote biometric identification in public spaces (subject to limited exceptions), and systems for cognitive behavioural manipulation. The Omnibus adds a new prohibition to this list: so-called nudifier applications, meaning AI systems that create or manipulate sexually explicit images of identifiable real persons without their consent. Parliament included an explicit exception for systems with effective safety measures that prevent the generation of such images, but the baseline rule is clear: such applications are prohibited.

Co-rapporteur Michael McNamara (Renew, Ireland) noted that he was glad the compromise reached a majority and that the ban on nudification apps was part of it.


Who is affected by which date

The two new deadlines do not apply equally to all organisations, and it is worth being clear about which timeline applies to your situation.

If you provide or deploy an AI system falling under one of the areas covered by Annex III, such as a system for selecting job applicants, a credit-scoring system, or a system used in healthcare, your new deadline is 2 December 2027. That is when the full set of high-risk obligations, including risk management, technical documentation, logging, transparency, and human oversight, becomes fully applicable.

If your system falls under an existing EU sectoral regime, such as the MDR for medical devices or the RED for radio equipment, your deadline is 2 August 2028. And your AI Act obligations may be less extensive than for other high-risk systems, given that the sectoral legislation already provides many safeguards.

For organisations in either category, the extended timeline does not mean that documentation and risk management can be deferred until 2027 or 2028. The consistent message from regulators is that investing now in internal governance, risk assessments, and technical documentation will put you in a far stronger position than attempting to do everything at the last moment.
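
To make obligations such as logging and human oversight slightly more concrete, here is a minimal sketch, assuming a Python-based system, of the kind of structured decision record a deployer could start keeping today. Every field name is an illustrative assumption, not a schema taken from the AI Act or from any regulator's guidance.

```python
# Minimal sketch of a decision log entry for a high-risk AI use case.
# Field names are illustrative assumptions, not AI Act requirements.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionLogEntry:
    system_name: str            # which AI system produced the output
    model_version: str          # version of the deployed model
    input_reference: str        # pointer to the input, not the raw personal data
    output_summary: str         # what the system recommended or decided
    human_reviewer: str | None  # who reviewed or overrode the output, if anyone
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = DecisionLogEntry(
    system_name="candidate-screening",
    model_version="2.3.1",
    input_reference="application-48213",
    output_summary="ranked in top 20% of applicants",
    human_reviewer="recruiter-07",
)
print(json.dumps(asdict(entry), indent=2))  # in practice, append to a tamper-evident audit store
```

The exact schema matters less than the habit: records like this are far easier to produce from day one than to reconstruct retroactively in late 2027.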


Watermarking: less time than the Commission proposed

Another element of the Omnibus vote deserves attention, particularly for organisations that produce AI-generated content: the obligation to label AI-generated content, also known as watermarking or synthetic content marking, is being adjusted as well.

The European Commission had proposed giving providers until 2 February 2027; Parliament chose a shorter deadline of 2 November 2026. For organisations that generate content with AI and publish it publicly, this is a specific point worth noting. It is not a new prohibition but an adjustment to an existing obligation, and the date now sits closer than some may have assumed.
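
The AI Act does not prescribe a single labelling technique. As a rough illustration only, here is a minimal sketch, assuming a plain JSON sidecar label, of how published content could carry a machine-readable AI-generated marker; the field names are assumptions, and real deployments may instead build on provenance standards such as C2PA.

```python
# Minimal sketch: a JSON sidecar label marking content as AI-generated.
# Field names are illustrative assumptions, not a mandated format.
import json
from datetime import datetime, timezone

def label_ai_generated(content_id: str, generator: str) -> str:
    """Return a JSON label recording that a piece of content was AI-generated."""
    label = {
        "content_id": content_id,   # identifier of the published asset
        "ai_generated": True,       # the disclosure itself
        "generator": generator,     # which system produced the content
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

print(label_ai_generated("blog-post-2026-04-01", "internal-llm-v4"))
```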


Processing personal data for bias correction

The Omnibus also introduces a new explicit legal basis for AI system providers: they may process personal data in order to detect and correct discriminatory bias in their systems. This sounds straightforward, but it was legally unclear until now, particularly for special categories of personal data. The Omnibus attaches strict safeguards to this processing, but the foundational permission is in place.

For teams responsible for both AI governance and data protection, this is a useful opening. Organising bias audits becomes more legally defensible without having to navigate a persistent gap in lawful grounds.
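
As a rough illustration of what a first-pass bias check can look like in practice, here is a minimal sketch that compares selection rates across groups. The data shape, group labels, and the idea of flagging a low disparity ratio are all illustrative assumptions; a real bias audit needs proper statistical methodology and legal review.

```python
# Minimal sketch of a demographic-parity style check on selection outcomes.
# Group labels, data shape, and thresholds are illustrative assumptions.
from collections import defaultdict

def selection_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of positive outcomes per group, from (group, selected) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

records = [("A", True), ("A", False), ("A", True),
           ("B", False), ("B", False), ("B", True)]
rates = selection_rates(records)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio: {ratio:.2f}")  # a low ratio is a signal to investigate, not a verdict
```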


What to do now in practice

The vote on 19 March is a committee position, not adopted law. Parliament votes in plenary on 26 March, and after that trilogue negotiations with the Council of the EU begin. The final text may still change. The direction, however, is clear, and for practical planning purposes the dates now on the table are the most realistic starting point.

For organisations that were preparing for August 2026, the practical implications are as follows. Do not stop preparing. The classification question, determining whether your system is high-risk, remains on the agenda and does not become simpler over time. The sooner you answer it, the better you understand which investments are genuinely necessary.

Use the additional time to do more thorough work, not to start later. Organisations that now build their risk management, technical documentation, and internal processes have time to test, refine, and embed them. Those who start in late 2027 do not have that space.

Keep monitoring the legislative process. The trilogue with the Council may adjust the precise dates, the scope of exceptions, and the wording of obligations. Follow developments actively so you are not surprised by a final text that diverges from the March committee position.

And finally: the prohibition rules and the AI literacy obligation already apply. If your organisation is not yet fully compliant with those, closing that gap is the most urgent item on the list, regardless of what happens with the Omnibus.

