On 19 November 2025, the European Commission published the Digital Omnibus on AI proposal, a package designed to "simplify" and make the AI Act "more proportionate." Barely fifteen months after the AI Act entered into force, the Commission is already proposing amendments on multiple fronts. The proposal affects AI literacy requirements, registration obligations, deadlines for high-risk systems, and the processing of special category personal data. This article provides a comprehensive overview: from the political background to the specific changes, the key reactions, and what this means for organisations currently working on AI Act compliance.
The political backdrop: why now?
The ink on the AI Act was barely dry when the first cracks appeared. Not in the law itself, but in the political landscape surrounding it.
In September 2024, Mario Draghi presented his now-famous report on European competitiveness. The message was unsparing: the EU is falling further behind the US and China, particularly in advanced technologies. More than 60% of EU companies saw regulation as an obstacle to investment, and 55% of SMEs flagged regulatory obstacles as their greatest challenge. The Draghi report became the intellectual foundation for a broader deregulation agenda.
Simultaneously, major tech companies, including Meta, Amazon, and Apple, launched an aggressive lobbying campaign. Their message: the AI Act "threatens innovation" and is "too expensive" to comply with. The Trump administration added external pressure through the American AI Action Plan, which explicitly called for removing "red tape" and pressured the EU to relax digital rules.
💡 Key point The Digital Omnibus proposal was not born of technical necessity but of political pressure. The Draghi report, industry lobbying, and geopolitical tensions created a perfect storm for deregulation, before most AI Act obligations had even taken effect.
Internally, things weren't running smoothly either. The designation of national supervisory authorities was proceeding slowly, the development of harmonised standards by CEN-CENELEC was falling behind, and companies complained about having to comply with rules for which the practical tools were still missing. That last point, the absence of standards, was a legitimate concern. But the Commission leveraged it as a catalyst for something much broader than a deadline extension.
The proposal: what's on paper?
On 19 November 2025, the Commission presented its Digital Omnibus package as part of a broader Digital Package, alongside the Data Union Strategy and European Business Wallets. The package consists of two legislative proposals: a general Digital Omnibus (amending the GDPR, ePrivacy Directive, and NIS2 among others) and a specific Digital Omnibus on AI that amends the AI Act.
The ambition is significant: the Commission aims to reduce administrative burdens for businesses by at least 25%, and for SMEs by 35%, by the end of 2029. Expected savings: at least six billion euros.
But the devil, as always, is in the details. Here are the key changes:
1. AI literacy: from obligation to encouragement (Article 4)
The current AI Act requires all providers and deployers of AI systems to ensure their staff has sufficient AI literacy. This obligation has been in force since 2 February 2025. The Omnibus proposal scraps this obligation and shifts responsibility to the Commission and Member States, who must "encourage" providers and deployers to take measures.
In practice, this means: a hard, enforceable duty becomes a policy recommendation. Training for deployers of high-risk AI systems remains mandatory, but the broad foundation falls away.
2. Registration requirement deleted (Article 49)
Under the current law, providers of AI systems falling under Annex III must still register in the EU database, even if they conclude via the Article 6(3) mechanism that their system is not high-risk. The Omnibus proposal deletes Article 49(2) entirely.
Providers need only document their self-assessment and keep it available for supervisory authorities. Public registration, and with it public accountability, disappears.
3. Transparency obligations delayed (Article 50(2))
AI systems generating synthetic audio, images, video, or text must mark their output in a machine-readable format (think watermarks or metadata). The Omnibus proposal gives systems placed on the market before 2 August 2026 an additional six months, until 2 February 2027.
4. Special category data processing expanded (new Article 4a)
The current AI Act permits the use of special category personal data (such as ethnicity or health data) for bias detection in high-risk AI systems, provided this is "strictly necessary." The Omnibus proposal extends this to all AI systems and lowers the threshold from "strictly necessary" to "necessary."
5. Deferred deadlines for high-risk AI (Article 113)
This is perhaps the most impactful change. Obligations for high-risk AI systems are linked to the availability of harmonised standards and other compliance tools:
| Type of high-risk AI | Current deadline | Omnibus proposal | Latest date |
|---|---|---|---|
| Annex III systems | 2 August 2026 | 6 months after confirmation of standards availability | No later than 2 December 2027 (+16 months) |
| Annex I systems (regulated products) | 2 August 2027 | 12 months after confirmation of standards availability | No later than 2 August 2028 (+12 months) |
| AI content transparency (Art. 50(2)) | 2 August 2026 | Delay for pre-Aug 2026 systems | 2 February 2027 (+6 months) |
6. Conformity assessment: sectoral legislation takes precedence (Article 43)
For products falling under both sectoral legislation (such as medical devices) and the AI Act, providers must now follow the conformity assessment procedure of the sectoral legislation. AI Act requirements are integrated into it, rather than requiring two parallel assessments.
7. Centralisation of supervision at the AI Office
Supervision of AI systems based on general-purpose AI models (where the same provider develops both model and system) and systems integrated into very large online platforms (VLOPs/VLOSEs) is centralised at the Commission's AI Office.
8. Extended arrangements for SMEs and small mid-caps
Simplified compliance procedures previously available only to micro-enterprises are extended to all SMEs and small mid-cap companies (SMCs). This includes simplified technical documentation and proportionate penalties.
⚖️ What's not addressed The proposal conspicuously leaves much unaddressed. Morrison & Foerster highlights: the unclear definition of "provider" (Art. 3(3)), the overlap between the fundamental rights impact assessment (Art. 27) and the DPIA under the GDPR, the overly narrow research exemption (Art. 2(8)), and the lack of a genuine conformity guarantee for AI sandboxes. The risk of national gold-plating via Article 82 also remains.
The reactions: three camps
The EDPB and EDPS: "support, provided that..."
On 20 January 2026, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) published their Joint Opinion 1/2026. The tone is diplomatic but firm. They support the objective of simplification but raise concerns about virtually every concrete measure:
AI literacy: The supervisory authorities are "strongly against" converting the mandatory AI literacy obligation into a soft encouragement. AI literacy is crucial for understanding AI concepts, ethical and social awareness, and the protection of fundamental rights. New obligations for the Commission should complement, not replace, existing provider and deployer obligations.
Registration: The EDPB and EDPS advise against deleting the registration requirement. The change would "significantly undermine" provider accountability and create undesirable incentives to unduly claim exemptions. The projected savings are marginal and do not justify the loss of transparency.
Special category data: They acknowledge the importance of bias detection but insist on restoring the "strictly necessary" threshold and clear delimitation to situations where the risk of adverse effects is "sufficiently serious."
Deadlines: "Sincere concerns" about the delay, given the rapid evolution of the AI landscape. The co-legislators are called upon to maintain the original timeline for certain obligations, particularly transparency requirements.
Civil society: "historic rollback"
Even before publication, 133 civil society organisations and trade unions signed a joint statement calling on the Commission to halt the Omnibus proposal. EDRi (European Digital Rights) called the proposal "a major rollback of EU digital protections." The Civil Liberties Union for Europe was even more direct: the Omnibus "gives Big Tech exactly what it wanted" and undermines the EU's position as a global leader in technology regulation.
Corporate Europe Observatory documented how specific amendments could be traced back to lobbying points of major tech companies. A particular point of criticism: the Commission did not carry out an impact assessment when drafting the proposal, while claiming the changes would have "no impact on fundamental rights", precisely as fundamental rights protections were being weakened.
The Dutch government: critical but nuanced
On 12 December 2025, the Dutch cabinet published its BNC-fiche on the Omnibus AI and Omnibus Digital. The tone: supportive of the objective, critical of the execution.
The Netherlands recognises that reduced regulatory burden can benefit businesses, particularly SMEs. But the cabinet states that several changes would "substantially diminish" the level of data protection. The Hague specifically raises concerns about:
- Personal data for AI training: the expanded use of (sensitive) personal data conflicts with fundamental rights and goes further than necessary for burden reduction
- GDPR adjustments: relaxation of the "legitimate interest" processing ground and data breach notification weaken citizen protection
- Centralisation of cyber reporting: the Netherlands fears national reporting systems will be bypassed and sensitive information about critical infrastructure will end up at the European level
- Missing impact assessment: unclear what the proposals concretely deliver and what the consequences are
The cabinet wants "further clarity from the Commission" before reaching a definitive judgment.
🇳🇱 Dutch position summarised The cabinet wants the omnibuses to "simplify, clarify, and streamline" without undermining the objectives of the legislation: protection of fundamental rights, safety, and privacy. A nuanced position that leaves room for negotiation but sets clear boundaries.
Where does the proposal stand now?
The Omnibus proposal follows the ordinary legislative procedure. Here's the expected timeline:
| Phase | Expected period | Status |
|---|---|---|
| Proposal publication | 19 November 2025 | ✅ Completed |
| EP committee assignment (IMCO, ITRE, LIBE) | December 2025 | ✅ Completed |
| EDPB/EDPS Joint Opinion | January 2026 | ✅ Published (20 Jan 2026) |
| EP amendments and committee report | Q1 2026 | 🔄 In progress |
| Council position (general approach) | Q1 2026 | 🔄 Technical discussions |
| Trilogue negotiations | Spring–Summer 2026 | ⏳ Planned |
| Expected adoption | Mid–Q3 2026 | ⏳ Subject to change |
An expedited procedure is possible (Rule 170 of the EP's Rules of Procedure), allowing the proposal to bypass the full committee stage and proceed directly to a plenary vote. This could enable adoption as early as Q1 2026, but significantly limits opportunities for amendments and stakeholder engagement.
In parallel, the Commission is working on a second phase: the Digital Fitness Check, a comprehensive "stress test" of the entire Digital Rulebook. Stakeholders can provide input until 11 March 2026.
What does this mean for organisations?
Here's where it gets practical. Whether the Omnibus proposal is adopted in its current form, weakened, or strengthened, organisations need to make decisions now.
Scenario 1: Omnibus is (largely) adopted
If adopted in Q2/Q3 2026, organisations with high-risk AI systems receive a maximum of 16 additional months (until December 2027 for Annex III systems). The AI literacy obligation becomes soft law, and the registration requirement for self-assessed non-high-risk systems disappears.
Scenario 2: Omnibus is significantly amended
The European Parliament and Council add stronger safeguards, for example preserving registration and AI literacy obligations while accepting deferred deadlines. This is the most likely scenario.
Scenario 3: Omnibus stalls or fails
Political disagreements or electoral dynamics delay the process such that original deadlines remain in force. Unlikely, but not impossible.
🎯 Practical advice: plan on the current law
Compliance officers: assume the current AI Act applies. The Omnibus is a proposal, not law. Obligations around prohibited AI practices (since February 2025) and GPAI models (since August 2025) remain in full force. The expected deferral period for high-risk is no reason to pause compliance programmes, but it is a reason to phase them pragmatically.
- ✅ AI literacy: continue investing, regardless of the Omnibus. The EDPB/EDPS support enforcement. Moreover, it's good risk management.
- ✅ Registration: register your systems proactively. If the requirement disappears, you've lost nothing. If it doesn't, you're prepared.
- ✅ High-risk compliance: start gap analyses and risk assessments now. Even with a 16-month deferral, implementation time is tight.
- ✅ Documentation: the documentation requirement for self-assessed non-high-risk systems remains in all scenarios.
Analysis: simplification or weakening?
Let's be honest: the AI Act did have implementation problems. Missing standards, undesignated national authorities, and delayed guidelines are real obstacles. Linking deadlines to the availability of standards is a defensible choice in itself.
But the proposal goes beyond pragmatic recalibration. Scrapping the AI literacy obligation is not simplification; it's a fundamental policy change. Removing the registration requirement for systems that are potentially high-risk undermines the transparency the entire AI Act was built upon. And lowering the threshold for processing special category data from "strictly necessary" to "necessary" is a subtle but meaningful difference that opens the door to broader use.
The core problem is that the Commission conflates two very different objectives: implementation support (more time, better standards, practical guidelines) and regulatory relief (fewer obligations, lower thresholds, less transparency). The former is legitimate and welcome. The latter is a political choice dressed up as technical simplification.
Morrison & Foerster puts it aptly: "If even the Commission and standardisation organisations fail to meet their own clarification goals and deadlines, how can the industry be expected to comply with often complex and unclear requirements?" That's a fair point. But the solution is better support, not less protection.
Gleiss Lutz emphasises: "The proposed amendments should not be seen as deregulation, but rather as concessions on a practical level." That's the optimistic reading. The pessimistic reading, and that of 133 civil society organisations, is that this marks the beginning of a systematic dismantling of Europe's digital rights framework.
The truth lies, as so often in Brussels, somewhere in the middle. The European Parliament has shown with previous Omnibus packages that it is willing to smooth rough edges. The chance of the proposal being adopted in its current form is small. But the direction is set, and that direction is: fewer obligations, more room for providers, longer transition periods.
Conclusion: vigilance is warranted
The Digital Omnibus proposal is not a disaster, but neither is it cause for relief. It addresses real implementation problems, but simultaneously packages substantial policy changes as "simplification." The coming months will be crucial: the European Parliament and the Council will determine whether the core of the AI Act (transparency, accountability, protection of fundamental rights) remains intact.
For organisations, the message is clear: don't wait for the Omnibus. The current AI Act is the law. The prohibitions are in force, the GPAI rules apply, and the high-risk deadlines are approaching, Omnibus or not. Use any potential deferral not as a reason to lean back, but as extra time to do it right.
As EDPB Chair Anu Talus put it: "Innovation and efficiency are crucial and can coexist with maintaining accountability of AI providers." That's not an impossible combination. It's precisely what the AI Act was designed for.
Want to make sure your organisation is compliant with the AI Act, regardless of what the Omnibus brings? Embed AI helps organisations with practical AI compliance, from gap analysis to implementation. Get in touch for a free consultation.