Digital Omnibus: What Survived the Leak and What Did Brussels Adjust?


The Digital Omnibus is intended to streamline Europe's fragmented digital rulebook. In a single package, it amends the GDPR, the e-Privacy rules, the Data Act, and the AI Act, among others. When a draft version leaked, organizations like noyb, EDRi, and ICCL raised the alarm. They warned of a creeping erosion of data protection under the guise of simplification.

On November 19, the European Commission published the official proposals for the Digital Omnibus. This makes one question particularly interesting, especially for lawyers, DPOs, and AI governance teams: to what extent was the leak accurate, and where has the Commission adjusted the plan?

In this blog, I walk through the key points: what the Digital Omnibus is, what was in the leaked version, what's actually in the official text, and what this means for organizations already working extensively with GDPR, Data Act, and AI Act.


What is the Digital Omnibus exactly?

The Digital Omnibus is not an entirely new regulation, but a package of amendments to existing laws. It mainly concerns:

  • the GDPR, for everything related to personal data and fundamental rights
  • the e-Privacy rules, for communication privacy and cookies
  • the Data Act and Data Governance Act, for data sharing and access to data
  • the AI Act, for risk-based regulation of AI systems
  • and links with cybersecurity frameworks like NIS2 and DORA

The Commission's official line is clear: digital legislation has grown strongly in a short time, with overlap and friction. The Digital Omnibus should harmonize definitions, reduce administrative burdens, and better align incident processes.

That promise sounds logical. At the same time, tension arises as soon as simplification leads to new exceptions, broader legal bases, or longer transition periods. The leaked text showed exactly that picture.


What was in the leaked Digital Omnibus?

The leak gave a fairly complete picture of the direction in which the Commission was thinking. The main elements:

1. GDPR more "relative" and friendlier to AI

In the leaked version, the definition of "personal data" was framed in explicitly relative terms: no longer primarily the question of whether someone is identifiable in absolute terms, but whether a specific party can identify a person. This shifts the boundary toward what companies often already claim in practice: that datasets are "not personal data for us."

Additionally, there were passages that explicitly made room to use personal data for AI training based on legitimate interest. Combined with a broader concept of automated decision-making and less stringent information obligations, this seemed like a significant shift in favor of developers.

Most sensitive were proposals to narrow the category of special categories of personal data. Only data that directly shows a sensitive characteristic would still fall under this. Information from which sensitive characteristics are derived, such as patterns from search behavior or location data, would fall outside this special protection according to the leak.

2. Data Act, DGA and incidents: merge and simplify

In the area of data sharing and incidents, too, the leak painted a clear picture. The Data Governance Act would largely be folded into the Data Act, with a more limited role for government access to business data. Incident reporting under the GDPR, NIS2, DORA, and related frameworks would be pulled toward a single central reporting point, with longer timelines and a sharper focus on serious incidents.

3. AI Act: delayed enforcement and exceptions for high-risk systems

Finally, the leaked text showed that the Commission was seriously considering postponing parts of the AI Act. The emphasis was on:

  • a later entry date for the strictest obligations for high-risk AI
  • exceptions for systems that only perform "narrow" or purely procedural tasks
  • extra time for obligations around labeling and watermarking

Civil rights organizations summarized this as a package that mainly offers comfort to large players and developers, while protection for citizens is postponed.


What's actually in the official Digital Omnibus?

The November 19 publication shows that the leak correctly captured the main lines. At the same time, a few sharp edges have been smoothed under pressure from criticism.

GDPR in the Digital Omnibus: confirmation of direction

In the official documents, the movement toward a more relative approach to personal data remains visible. Identifiability is explicitly placed in context. For organizations that have long reasoned in terms of "pseudonymous data" and "practical identifiability," this feels like a legal anchoring of practice.

The direction on AI and the GDPR is likewise confirmed. The Digital Omnibus introduces an explicit pathway to use personal data for:

  • developing and training AI models
  • testing for bias and quality
  • improving existing models

Legitimate interest is designated as the legal basis, with additional conditions and an explicit right to object for data subjects. The leak was therefore correct that a new route is being opened to legally justify AI training.

Additionally, the rules around DSARs, data breach notifications, and cookies are being recalibrated. The official proposal retains the core of the leaked text:

  • more room to refuse requests or handle them for a fee in cases of clear abuse
  • a longer timeline and more centralized approach for data breach notifications, focused on incidents with real risk
  • exceptions for certain measurement and security cookies, aimed at reducing useless cookie banners

For many organizations, this will sound familiar and attractive, especially for large platforms and digital service providers.

Data Act, DGA and incident management

The Digital Omnibus does indeed fold the Data Governance Act into an updated Data Act. The scope for data demands by governments is defined more narrowly, with an emphasis on serious situations and emergencies. This, too, aligns with the leak.

On incident management, the Commission places strong emphasis on a more uniform reporting structure across the different frameworks. For organizations that currently run a separate process for each regime, this could save work in the long run. The downside is that the reporting threshold rises, so some events stay out of sight.

AI Act: delay and relief elaborated

The Digital Omnibus on AI Regulation, the sister package focused on adjustments to the AI Act, confirms the main line from the leak. The strictest obligations for high-risk systems are tied to the availability of harmonized standards and tools, pushing the practical entry date to 2027 or 2028.

Additionally, the exemption from registration in the EU database for certain high-risk systems is elaborated. Systems that only perform supporting or procedural tasks do not have to be in the central database under certain conditions. The core of this idea was already in the leaked text and is now anchored in more detail.


Where did the Commission really adjust after the leak?

The criticism from noyb, EDRi, ICCL, and other parties did not prevent everything, but did force a few essential adjustments.

1. Special categories of personal data remain broadly protected

The proposal to limit the scope of special categories of data to directly sensitive information has disappeared from the official text. This is a clear step back from the leak.

Instead, the current broad approach from the GDPR remains the starting point. Inferences and profiles that say something about health, political preference, religion, or sexual orientation remain under stricter rules. The Commission therefore chooses not to break open the foundation of Article 9 here, but to create a targeted exception for AI under strict conditions.

2. AI training based on legitimate interest with extra safeguards

Where the leak still gave the impression of almost unlimited space, the official text is formulated somewhat more cautiously. There is indeed a route to bring AI training under legitimate interest, but:

  • it is explicitly stated that data subjects retain an effective right to object
  • other legislation can continue to require consent
  • organizations must solidly justify necessity and proportionality

This will not end the discussion, but it makes the picture more nuanced than the initial commentary on the leak suggested.

3. AI Act: no vague "stop the clock," but concrete shift

Instead of an open-ended pause, as the leaked notes still suggested, the official Digital Omnibus on AI Regulation contains concrete dates tied to standards and guidance. In practice, the effect is comparable, only legally more tidily worked out.

The message remains that providers of high-risk AI get more time, and that certain categories of systems fall under lighter obligations if they mainly perform supporting tasks.


What does this mean for organizations investing in AI governance?

For those seriously engaged with AI governance and data protection, the Digital Omnibus gives a dual signal.

On one hand, part of the complexity is being addressed. Data breach notifications, incident processes, and definitions are moving closer together. The link between GDPR and AI Act becomes clearer, especially around AI training and risk assessment.

On the other hand, the playing field becomes more dynamic. The bar for AI training and high-risk AI appears to be lowered for some parties, while organizations that invested early in strict interpretations wonder whether they now face a competitive disadvantage.

For lawyers and DPOs, it comes down to a few strategic choices:

  • Do you choose the minimum legal framework offered in the Digital Omnibus, or do you establish a higher internal standard for AI training, profiling, and use of sensitive data?
  • How do you deal with the tension between longer transition periods in the AI Act and the expectation of customers and supervisors that systems are already responsible and explainable now?
  • What role does the ethical side of AI use get in your organization, apart from what is strictly legally permitted?

Three concrete steps for the coming months

To conclude, three steps you can already prepare as a lawyer, DPO, or AI governance lead based on the Digital Omnibus:

1. Create an overview of all AI training use cases in your organization

Map which datasets are used, which legal bases are currently invoked, and how sensitive the data is. Use that overview to determine whether you want to use the new AI route under legitimate interest or not.
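Such an overview does not need to be elaborate to be useful. As a minimal sketch of what a register could look like, here is a small Python example; the use-case names, dataset labels, and field choices are purely hypothetical and stand in for whatever your own inventory captures:

```python
from dataclasses import dataclass

@dataclass
class TrainingUseCase:
    """One AI training use case in a hypothetical internal register."""
    name: str
    datasets: list[str]           # datasets the model is trained on
    legal_basis: str              # basis currently invoked, e.g. "consent"
    contains_special_data: bool   # Article 9 special-category data involved?

# Illustrative entries only -- replace with your organization's own use cases
register = [
    TrainingUseCase("support-chatbot", ["ticket-archive"], "legitimate interest", False),
    TrainingUseCase("churn-model", ["crm-profiles"], "consent", False),
    TrainingUseCase("triage-assistant", ["patient-notes"], "consent", True),
]

# First-pass filter: use cases that already invoke legitimate interest and
# involve no special-category data are the obvious candidates to assess
# against the new AI route; the rest need a closer legal look.
candidates = [u.name for u in register
              if u.legal_basis == "legitimate interest"
              and not u.contains_special_data]
print(candidates)  # → ['support-chatbot']
```

The point of the exercise is the filter at the end: it makes visible, per use case, whether the new legitimate-interest route is even on the table before any deeper necessity and proportionality analysis starts.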

2. Review your incident and reporting process with the Digital Omnibus in mind

Look at the interplay between data breaches, AI incidents, and cybersecurity notifications. An integrated approach will help later when the central reporting structure takes further shape.
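One way to make that interplay concrete is a single intake step that flags which regimes an event may trigger. The sketch below is a deliberately simplified triage function; the regime descriptions reflect the current GDPR 72-hour and NIS2 24-hour early-warning deadlines, but real scoping always requires a legal assessment per framework:

```python
def applicable_regimes(personal_data_breach: bool,
                       essential_service_disruption: bool,
                       financial_ict_incident: bool) -> list[str]:
    """Very simplified first-pass triage: which reporting regimes may apply.

    Illustrative only -- thresholds and scoping under each framework are
    far more nuanced than three boolean flags.
    """
    regimes = []
    if personal_data_breach:
        regimes.append("GDPR breach notification (currently within 72 hours)")
    if essential_service_disruption:
        regimes.append("NIS2 early warning (within 24 hours)")
    if financial_ict_incident:
        regimes.append("DORA major ICT incident reporting")
    return regimes

# A ransomware event hitting both personal data and service availability:
print(applicable_regimes(True, True, False))
```

Even this toy version shows why a single intake pays off: one event routinely triggers more than one regime, and a per-regime process risks handling the same incident three times with three clocks running.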

3. Use the discussion around the leak as a conversation starter in the boardroom

The tension between protection of fundamental rights and space for AI innovation has not disappeared with the Digital Omnibus, but has shifted. Show that you know the official text, but also explain which choices you consider wise, precisely on points where the law now offers more space.

The core point is that the leak was not a phantom. The Digital Omnibus confirms large parts of the direction that was visible then, with one clear boundary that Brussels did not dare to cross: hollowing out the protection of special categories of personal data. For everything else, the initiative now lies with organizations themselves to determine which standard they want to maintain in a time when data, AI, and trust are increasingly intertwined.