
European Parliament votes on AI Act Omnibus: delays for high-risk AI and ban on nudifier apps


Parliament position adopted, not yet final law. On 26 March 2026 the European Parliament adopted its position on the AI Act Omnibus with 569 votes in favour, 45 against and 23 abstentions. This is the mandate with which Parliament enters trilogue; negotiations with the Council of the EU have not yet begun. The final text may differ from what is described here.

On 26 March 2026 the full European Parliament voted on the AI Act Omnibus, the simplification package proposed by the European Commission in November 2025 as part of its seventh omnibus initiative. The result was unambiguous: 569 members voted in favour, 45 against and 23 abstained. Following the committee vote of 19 March 2026, the Parliament confirmed its position on four themes that directly matter to organisations working with AI.


Two separate postponement timelines for high-risk AI

The most discussed element of the Omnibus is the shift in deadlines for high-risk AI systems. The text draws a distinction that has not always been clearly reported in public coverage.

On one side are systems listed in Annex III of the AI Act, the group most organisations have in mind when they think about high-risk AI. This includes biometric identification systems, applications for critical infrastructure, AI in education and employment, systems for essential services, law enforcement, justice, and border management. For all of these systems, the date on which high-risk obligations apply shifts from 2 August 2026 to 2 December 2027.

On the other side are AI systems that are already regulated by existing EU sectoral safety legislation. Medical devices, radio equipment, and toy safety are prominent examples. That category gets even more time: until 2 August 2028. The reasoning is that products already subject to comprehensive sector-specific regimes should not carry a double compliance burden. The Omnibus also provides that AI Act obligations for those products may be less stringent than for systems without equivalent sectoral regulation.

New application timeline after the plenary vote

High-risk AI - Annex III (biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice, border management): obligations apply from 2 December 2027.

High-risk AI - EU sectoral legislation (medical devices, radio equipment, toy safety and comparable regimes): obligations apply from 2 August 2028.

Watermarking (AI-generated audio, image, video and text): providers must comply from 2 November 2026.

Prohibited practices - nudifier apps: applies upon entry into force of the final text.
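For planning purposes, the revised dates above can be captured as a simple lookup. The sketch below is illustrative only: the category keys and the `obligations_apply` helper are hypothetical names, not terms from the AI Act, and the dates reflect Parliament's position, which may still change in trilogue.

```python
from datetime import date

# Application dates per Parliament's position of 26 March 2026.
# Category keys are illustrative labels, not AI Act terminology.
APPLICATION_DATES = {
    "high_risk_annex_iii": date(2027, 12, 2),    # biometrics, education, employment, ...
    "high_risk_sectoral": date(2028, 8, 2),      # medical devices, radio equipment, toys
    "watermarking_article_50": date(2026, 11, 2),
}

def obligations_apply(category: str, on: date) -> bool:
    """Return True if the category's obligations already apply on the given date."""
    return on >= APPLICATION_DATES[category]
```

For example, `obligations_apply("watermarking_article_50", date(2026, 12, 1))` returns `True`, while the Annex III obligations would not yet apply on that date.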

It is essential to emphasise that obligations already in force are unaffected. The AI literacy requirement of Article 4 has applied since 2 February 2025. The prohibited practices listed in Article 5 are also already active. The Omnibus does not reopen those dates.


Nudifier apps: a new explicit prohibition

The most striking new element of the Omnibus is the explicit addition of so-called nudifier applications to the list of prohibited AI practices. These are systems that use AI to create or manipulate sexually explicit or intimate images resembling an identifiable real person without that person's consent. The fact that this prohibition is now inserted separately is a direct response to the growing problem of non-consensual intimate imagery, also known as deepfake pornography or NCII.

The text includes a targeted exception: providers whose systems have effective safety measures that actively prevent the creation of such images fall outside the prohibition. The bar for that exception has been deliberately set high. A general terms of service clause or moderation policy does not suffice. The measure must be technically effective.

For most organisations offering AI tools, this prohibition is not an operational surprise. Platforms providing generative image editing will need to assess their architecture and moderation design against this criterion. That is, however, a separate exercise from the broader high-risk obligations imposed elsewhere in the AI Act.


Watermarking: earlier than the Commission proposed

In its original Omnibus proposal, the European Commission had suggested giving providers of AI-generated content tools until 2 February 2027 to comply with the watermarking and labelling obligations of Article 50. Parliament chose a stricter deadline: 2 November 2026, three months earlier.

For organisations that generate and publicly distribute AI-produced text, audio, images or video, this is a point to incorporate into planning. Article 50 requires transparent labelling of synthetic content so that users can tell that the information was produced by AI. The Omnibus still pushes the date back relative to the original regulation, but Parliament rejected the full extent of the relief the Commission had proposed on this point.
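To make the labelling idea concrete, here is a deliberately minimal sketch of the human-readable half of the obligation. The function name and disclosure string are hypothetical; actual Article 50 compliance also involves machine-readable marking (metadata or watermarking) in line with the technical standards that will accompany the final text.

```python
# Hypothetical helper: prepend a visible AI disclosure to generated text.
# A toy illustration of transparent labelling, not a compliance implementation.
AI_DISCLOSURE = "This content was generated by an AI system."

def label_synthetic_text(text: str) -> str:
    """Return the text with a human-readable AI disclosure prepended."""
    return f"[{AI_DISCLOSURE}]\n{text}"
```

In practice the disclosure would need to be clear, distinguishable and persistent across the channels where the content is published, which is why metadata-based marking matters alongside a visible label.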


Bias correction and personal data

The Omnibus introduces a new explicit legal basis for AI system providers: they may process personal data, including special categories, to detect and correct discriminatory bias in their systems. Until now this was a legal grey area, particularly when sensitive data such as ethnicity, health status or religion is needed to identify bias in training datasets.

The Omnibus imposes strict safeguards on this processing, but the foundational authorisation is in place. This means that bias audits, which are already mandatory for high-risk AI systems, can now be carried out on a clearer legal footing. For teams that combine AI governance and data protection responsibilities, this is a meaningful gain in legal clarity.


Support extended to small mid-cap enterprises

The AI Act already contains specific support measures for small and medium-sized enterprises, including access to regulatory sandboxes and reduced administrative requirements. The Omnibus extends those measures to small mid-cap enterprises, a category larger than the classic SME definition but still relatively small in scale. This is a practical acknowledgement that compliance burdens under the AI Act are not a challenge exclusive to the very smallest players.


What happens next: trilogue with the Council

The plenary vote on 26 March marks the beginning of the next phase, not the end of the legislative process. The Council of the EU has not yet adopted a final position on the AI Act Omnibus. Once it does, trilogue negotiations begin: the three-way discussions between Parliament, the Council and the European Commission.

That negotiation process may lead to adjustments in the dates, the scope of exceptions and the precise wording of obligations. Organisations building their planning around the dates now on the table should continue monitoring the trilogue outcome. The direction Parliament has taken provides a strong signal, but the definitive law is what counts. And it does not yet exist.

For compliance teams the message is unchanged from earlier this year: do not stop preparing. The classification question - whether a system is high-risk or not - needs to be answered early. Investing now in risk management, technical documentation and internal governance processes gives you time to test and refine them before the deadlines arrive. Waiting for the final text means losing that room.


Sources

European Parliament: AI Act: delayed application, ban on nudifier apps (26 March 2026)


❓ Frequently asked questions

What transparency obligations apply to AI?
Article 50 requires that users are informed when interacting with an AI system (such as chatbots), when content is AI-generated (deepfakes), and when emotion recognition or biometric categorisation is applied.
What is AI literacy under the AI Act?
Article 4 requires providers and deployers to ensure their staff has sufficient knowledge of AI systems. This includes understanding the functioning, risks, and limitations of the AI being used.
Which AI practices are prohibited under the AI Act?
Article 5 prohibits manipulative AI techniques, exploitation of vulnerabilities, social scoring, real-time biometric identification in public spaces (with exceptions), and emotion recognition in the workplace and education.