Digital Omnibus: Simplification or Silent Erosion of Digital Rights?

How a simplification package unexpectedly triggers a debate about the core of European digital rights

Turning point in European digital law: On November 19, 2025, the European Commission will officially present the Digital Omnibus, a package that aims to amend the GDPR, AI Act, Data Act and e-Privacy Directive. Civil society organizations such as noyb, EDRi and ICCL warn that leaked draft texts go beyond simplification and could undermine fundamental protection. The question is no longer whether change is coming, but what protection will remain standing.

What exactly is the Digital Omnibus?

The Digital Omnibus is a legislative package through which the European Commission wants to amend parts of multiple digital laws in one go. It concerns a broad range of regulations that have been adopted in recent years and are now perceived as fragmented and overlapping.

The main laws on the table are the GDPR (General Data Protection Regulation), the e-Privacy Directive (confidentiality of communications and cookies), the Data Act (access to and sharing of data), the AI Act (risk-based rules for AI), and related cybersecurity legislation such as NIS-2. (Digital Strategy EU)

On September 16, 2025, the Commission opened a so-called call for evidence to gather input on how these rules can be simplified. The official message: businesses and governments struggle with overlapping obligations, conflicting definitions and high administrative burdens. Executive Vice-President Henna Virkkunen articulated the goal as "less paperwork, fewer overlaps and less complex rules", with a target of a 25% reduction in administrative burdens for all businesses and 35% for SMEs. (Digital Strategy EU)

In itself, this is a recognizable problem. Anyone working seriously with the GDPR, Data Act and AI Act immediately sees that definitions sometimes overlap and sometimes fail to connect. Systematic harmonization can be useful. The key question, however, is where simplification ends and substantive weakening begins.

The digital regulation stack

The European digital regulatory package is growing rapidly:

  • GDPR (2018): privacy and data protection
  • e-Privacy Directive: electronic communications and cookies
  • Digital Services Act (2024): platform responsibility
  • Digital Markets Act (2024): market power of tech giants
  • Data Act (2025): access to and sharing of industrial data
  • AI Act (2025-2027): risk-based AI regulation
  • NIS-2 (2024): cybersecurity for network and information systems
  • Cyber Resilience Act (2027): cybersecurity for products

Harmonizing this package is complex but necessary. The question is how.

The alarm bell: why civil society organizations are raising concerns

On November 11, 2025, noyb (None of Your Business), European Digital Rights (EDRi) and the Irish Council for Civil Liberties (ICCL) published a joint open letter titled "Digital omnibus brings deregulation, not simplification." (noyb.eu) The timing was not coincidental: the organizations had gained access to internal draft texts and were shocked by what they read.

Their core message: this is not a neutral technical cleanup, but a fundamental revision of key elements of European digital rights. Max Schrems, chairman of noyb and architect of multiple successful lawsuits against Big Tech, stated sharply: "The EU Commission is about to wreck the core principles of the GDPR." (noyb.eu)

The organizations point to three fundamental problems with the process:

1. Lack of transparency and democratic legitimacy

According to the organizations, the amendments were prepared in secret without prior public consultation on such far-reaching reforms. Where the original GDPR involved years of debate and extensive impact assessments, a fast-track procedure is now being used that is normally intended for technical adjustments. (noyb.eu)

2. Disproportion between form and content

The package is called "simplification" but the leaked texts show fundamental changes in definitions, legal bases and levels of protection. The Irish Council for Civil Liberties speaks of "deregulation disguised as administrative simplification."

3. Conflict with the EU Charter of Fundamental Rights

The organizations warn that some proposed amendments may conflict with Article 8 (protection of personal data) and Article 7 (respect for private life) of the Charter. If this is legally established, the amendments can be annulled by the Court of Justice - but only after years of uncertainty. (noyb.eu)

What's in the leaked draft texts?

Multiple organizations have analyzed an internal draft text. Noyb published a detailed 13-page analysis of the proposed GDPR amendments. (noyb.eu) This reveals concerning patterns that go beyond mere harmonization.

1. Redefinition of "personal data" creates massive exception

The current GDPR uses a broad definition: personal data is any information relating to an identified or identifiable person. Even if a company cannot directly identify someone, the GDPR still applies as long as identification is technically possible by combining the data with other sources.

The proposed amendment introduces a criterion whereby data only counts as personal data if the company itself can identify the person with "reasonable means." This sounds technical but has far-reaching consequences.

Concrete example: An advertising company collects data about "user_7384952" including location history, browsing behavior and purchase patterns. Under the current GDPR, this is personal data because it concerns an identifiable person, even if the company doesn't know it's Jan Jansen from Utrecht. Under the new definition, the company could argue that it cannot identify the person with "reasonable means" and that the data thus falls outside the GDPR.
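To make this concrete in code, here is a minimal sketch (all data and names invented for illustration) of the linkage attack the current definition anticipates: a pseudonymous profile is matched against auxiliary data on a few quasi-identifiers, and the pseudonym resolves to a person.

```python
# Toy illustration (hypothetical data): a pseudonymous ad profile is
# re-identified by joining it with auxiliary data on quasi-identifiers.
# This technical possibility is why the current GDPR treats
# "user_7384952" as personal data.

ad_profile = {
    "pseudonym": "user_7384952",
    "home_area": "Utrecht-Oost",      # inferred from night-time locations
    "work_area": "Utrecht-Centrum",   # inferred from daytime locations
    "gym_slot": "Mon/Wed 18:00",      # inferred from movement patterns
}

# Auxiliary data the advertiser itself may not hold, but a data broker,
# partner or breach dump might.
auxiliary = [
    {"name": "Jan Jansen", "home_area": "Utrecht-Oost",
     "work_area": "Utrecht-Centrum", "gym_slot": "Mon/Wed 18:00"},
    {"name": "Pia de Vries", "home_area": "Utrecht-West",
     "work_area": "Amersfoort", "gym_slot": "Sat 10:00"},
]

QUASI_IDENTIFIERS = ("home_area", "work_area", "gym_slot")

def reidentify(profile: dict, aux: list[dict]) -> list[dict]:
    """Return every auxiliary record matching all quasi-identifiers."""
    return [rec for rec in aux
            if all(rec[k] == profile[k] for k in QUASI_IDENTIFIERS)]

matches = reidentify(ad_profile, auxiliary)
if len(matches) == 1:
    print(f'{ad_profile["pseudonym"]} is {matches[0]["name"]}')
    # -> user_7384952 is Jan Jansen
```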

The tracking paradox: Entire sectors such as online tracking, programmatic advertising and data brokers would largely fall outside GDPR protection. Ironically, the most invasive data processing is exempted because it works through pseudonyms rather than direct name identification.

2. Limitation of data subject rights to "data protection purposes"

The current GDPR provides clear rights: access, correction, deletion, data portability. These rights apply regardless of why someone exercises them. An employee who requests their personnel file because they suspect errors affecting their salary has that right.

The proposed amendment adds a criterion: these rights only apply for "data protection purposes." If the authority or court judges that someone is "abusing" the right for other purposes (for example in an employment dispute, or as a journalist conducting an investigation), the request can be refused.

Who this affects:

  • Journalists who use access requests to investigate business practices
  • Employees who request data in labor disputes about unpaid hours or discrimination
  • Researchers who want to analyze algorithms or data processes
  • Consumers who want to demonstrate price discrimination

Before and after: data subject rights

Current GDPR (Article 15)

"The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed."

Proposed amendment (leaked draft)

"Access may be refused if the request is not directed at data protection purposes but at other interests such as employment disputes or commercial claims."

3. AI training gets virtually carte blanche via "legitimate interest"

One of the most controversial parts is the explicit expansion of legitimate interest as a legal basis for AI training. The current GDPR has six legal bases on which you can process personal data. Legitimate interest is one of them, but requires careful balancing between the company's interest and the person's rights.

The proposed amendment explicitly states that AI training, testing and validation can be performed based on legitimate interest, provided there are "safeguards" such as data minimization, transparency and an unconditional right to object. (Tech Policy Press)

This seems reasonable, but the practice is more problematic:

Data minimization in AI training: Large language models require massive amounts of diverse data. The concept of "minimization" is difficult to apply when the entire business case revolves around scale.

Transparency: Companies can claim to be transparent by stating in general terms that they use data for "AI improvement." The data subject still doesn't know which specific texts or photos of them were used.

Right to object: This is presented as a safeguard, but noyb points out that objections can almost always be dismissed in practice because the company can invoke "compelling legitimate grounds." For AI training, this is simple: "without this data we cannot train our model." (noyb.eu)

Who benefits from this amendment?

This amendment is not written with SMEs in mind. It is mainly companies like OpenAI, Google, Meta, Amazon and Microsoft that benefit enormously from broader possibilities to use European data for AI training. These companies have a combined market value of trillions and lobby intensively for more lenient rules. (Tech Policy Press)

4. Special categories of personal data lose protection

Article 9 GDPR provides enhanced protection for sensitive data: health, political opinions, sexual orientation, biometric data, and so on. Processing such data is prohibited in principle, unless an explicit exception applies.

The proposed amendment introduces a distinction between directly disclosed sensitive data and derived sensitive data. Only the first category still receives the strong protection of Article 9.

The paradox in practice:

Suppose a person writes on social media "I'm expecting!" - this is directly disclosed and receives protection.

The same person searches for pregnancy yoga, buys prenatal vitamins online and adjusts their running schedule. AI can infer with high certainty that they're pregnant. But because this is derived information, it falls outside special protection.

The result: precisely the most sophisticated and invasive forms of data analysis - where AI derives sensitive characteristics from seemingly neutral behavioral data - escape protection. (noyb.eu)
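A toy sketch of the inference problem (all signals and weights invented): individually neutral events are combined into a sensitive conclusion. Under the disclosed/derived split, that conclusion would escape Article 9 protection even though it is exactly as sensitive as a direct statement.

```python
# Toy scorer (all signals and weights invented): deriving a sensitive
# attribute from individually "neutral" behavioral events.

PREGNANCY_SIGNALS = {
    "search:pregnancy yoga": 0.4,
    "purchase:prenatal vitamins": 0.5,
    "app:running schedule reduced": 0.2,
}

def infer_pregnancy(events: list[str], threshold: float = 0.7) -> bool:
    """Sum the weights of observed events; flag if above the threshold."""
    score = sum(PREGNANCY_SIGNALS.get(event, 0.0) for event in events)
    return score >= threshold

events = ["search:pregnancy yoga", "purchase:prenatal vitamins"]
print(infer_pregnancy(events))  # True - derived, so outside Article 9
```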

5. Remote device access without consent

The proposals would enable remote access to personal data on smartphones and PCs under ten different legal bases - without explicit user consent. (noyb.eu)

This directly touches a fundamental element of digital autonomy: control over your own device. The current e-Privacy Directive requires consent for access to information on terminal equipment. The proposed amendment could circumvent this by reframing the processing under GDPR bases.

Practical impact: Apps and services can collect data from your device - such as sensor data, usage patterns and locally stored information - and invoke legitimate interest, contractual necessity or other bases without your specific consent.
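A simplified sketch (hypothetical API, condensing the legal nuance) of how the gate would shift: the current e-Privacy rule is a single consent check, while the draft would let a controller pick any basis from a menu.

```python
# Simplified sketch (hypothetical API): the legal gate for reading data
# from a user's device, before and after the proposed change.
from enum import Enum, auto

class LegalBasis(Enum):
    CONSENT = auto()
    CONTRACT = auto()
    LEGITIMATE_INTEREST = auto()
    # ...the leaked drafts reportedly list ten bases in total

def may_access_device_eprivacy(user_consented: bool) -> bool:
    # Current regime: access to terminal equipment requires consent
    # (narrow exceptions such as strict technical necessity aside).
    return user_consented

PERMITTED_BASES_DRAFT = set(LegalBasis)

def may_access_device_draft(claimed_basis: LegalBasis) -> bool:
    # Proposed regime: consent becomes one option among many, so the
    # user's explicit agreement is no longer the deciding factor.
    return claimed_basis in PERMITTED_BASES_DRAFT

print(may_access_device_eprivacy(user_consented=False))         # False
print(may_access_device_draft(LegalBasis.LEGITIMATE_INTEREST))  # True
```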

The AI Act dimension: easing under time pressure

In addition to GDPR amendments, the Digital Omnibus also contains adjustments to the recently adopted AI Act. Reuters reported based on leaked documents that the Commission is considering easing or postponing certain obligations. (Reuters)

Grace period until August 2027

The most concrete proposed amendment is an extension of the enforcement period. Instead of national supervisors being able to immediately impose fines for non-compliance, there would be a grace period until August 2027. This means companies get two extra years before financial sanctions can actually be imposed.

For organizations already investing heavily in AI Act compliance, this feels ambivalent. On one hand, it provides more time and certainty. On the other, it rewards companies that held off on investing, while early movers have already incurred the costs.

Exceptions to registration requirement

The AI Act requires high-risk AI systems to be registered in a European database before being placed on the market. This creates transparency and enables supervisors to oversee the landscape.

The leaked drafts suggest the Commission is considering exempting certain systems from this registration requirement if they are only used for "limited" or purely procedural tasks. The problem: the definition of "limited" is vague and can be broadly interpreted. (Reuters)

The domino effects of postponement: When enforcement is delayed, organizations have less incentive to professionalize in time. This can lead to more incidents in the meantime. At the same time, uncertainty arises: do you invest fully in compliance now, or wait until 2027 to see how strict it really becomes?

Easing of transparency obligations for generative AI

Another element is possible easing of the obligation to label AI-generated content. The AI Act states that content generated by AI (text, image, video) must be recognizable to end users. This is to counter manipulation and disinformation.

The proposed amendment could postpone or limit this obligation to specific contexts. The reasoning: it is technically complex and can hinder innovation. The counter-argument: without labeling, citizens cannot distinguish what is real and what is AI-generated, which poses serious democratic risks.

The official Commission position: reducing administrative burdens

It's important to also understand the Commission's official reasoning. In their announcement of the call for evidence, the Commission emphasizes that the goal is to "facilitate doing business in Europe without jeopardizing our high standards for online fairness and safety." (Digital Strategy EU)

Executive Vice-President Henna Virkkunen states that businesses, particularly SMEs, struggle with:

  • Overlapping reporting obligations: The same information often needs to be reported in different formats to different authorities
  • Inconsistent definitions: What "AI system" means in the AI Act doesn't always align with how this is defined in sector-specific legislation
  • Complex compliance trajectories: The combination of GDPR, Data Act, AI Act, NIS-2 and sectoral legislation creates an administrative burden that is particularly heavy for smaller organizations

The Commission presents concrete figures: 25% reduction in administrative burdens for all businesses and 35% for SMEs. This fits within the broader Competitiveness Compass agenda with which the EU wants to strengthen its competitive position vis-à-vis the US and China.

The perspectives at a glance:

  • European Commission: simplification is necessary for competitiveness and SMEs (focus: administrative burdens, business perspective)
  • Privacy organizations: deregulation disguised as simplification, driven by the Big Tech lobby (focus: fundamental rights, citizen perspective)
  • Large tech companies: rules hinder innovation and the EU falls behind the US and China (focus: competition, AI development)
  • Supervisory authorities: harmonization is useful, but the level of protection must be safeguarded (focus: enforceability, effectiveness)

Lobby pressure: who's influencing the course?

It would be naive to think the Digital Omnibus is developed in a vacuum. There is intense lobbying activity from multiple sides.

Big Tech coalition: Companies like Google, Meta, Microsoft, Amazon and OpenAI have substantial lobbying power in Brussels. Their joint argument: Europe is falling behind in the AI race and overly strict rules exacerbate this. They point to the US where AI companies face fewer restrictions and to China which invests massively. (Tech Policy Press)

European industry: Traditional European companies - automotive, manufacturing, finance - also lobby for more lenient rules. Their argument differs: they want to innovate but feel held back by regulatory burden, and SMEs need data to develop AI tools.

Privacy and civil rights organizations: On the other side, noyb, EDRi, ICCL, Access Now and dozens of other organizations lobby for maintaining protection. Their argument: fundamental rights are non-negotiable and must not be sacrificed for economic goals.

Member states: The positions of individual member states vary. Some countries (like France and Germany) advocate for balance between innovation and protection. Others (like Ireland, where many Big Tech headquarters are located) lean toward more lenient rules. Some (like Poland and Scandinavian countries) emphasize the importance of fundamental rights.

Follow the money: Transparency figures show that Big Tech spends tens of millions of euros annually on lobbying activities in Brussels. This includes not only direct lobbying but also funding of think tanks, research reports and "coalitions for innovation" that carry the message further. The question is whether this influences or distorts democratic decision-making.

What this means for organizations already investing in governance

For organizations seriously working on GDPR compliance, AI governance and responsible data practices, the Digital Omnibus feels ambivalent. Let's go through the implications concretely.

Scenario 1: The amendments go through as leaked

If the leaked drafts are largely adopted, the playing field changes fundamentally:

For AI providers and GPAI players: Your competitors who until now were conservative with personal data for AI training (e.g., through strong pseudonymization or synthetic data) suddenly see the bar lowered. Companies that already did massive web scraping "at risk" are vindicated in hindsight. The question becomes: do you adapt your strategy or maintain a higher internal standard?

For deployers and users: You've invested in DPIAs, vendor assessments and contractual safeguards. If your suppliers now get broader legal space to use data, you need to revise your contracts. The question: do you continue to require that your data not be used for AI training, or do you accept this as the new norm?

For DPOs and lawyers: Your field of work shifts. Tasks that are now clear-cut (e.g., testing whether legitimate interest is valid for AI training) become more complex and gray. You need to take an internal position: do we interpret the new rules narrowly or as broadly as possible? How do our values relate to the legal minimum?

Scenario 2: The amendments are weakened after public debate

If public debate and pressure from the European Parliament lead to substantial adjustments:

Reputational advantage for early movers: Organizations that invested in strong governance despite uncertainty can communicate this as a competitive advantage. "We already applied strict standards before it was required" is a powerful message to customers and stakeholders.

Compliance lead: If the final rules remain stricter than the leaked drafts suggest, organizations that continued investing have a lead over competitors who waited.

Internal support: Investing in governance despite uncertainty shows the organization places principles above short-term opportunism. This strengthens internal culture and ethical values.

Scenario 3: Fragmentation - different member states interpret differently

A real risk is that the Omnibus aims for uniformity but actually leads to more fragmentation as member states interpret the new space differently:

Supervisor roulette: If, for example, the French CNIL takes the strict view that AI training requires consent, while the Irish DPC is lenient with legitimate interest, regulatory arbitrage arises. Companies then choose their location based on where interpretation is most lenient.

Increased compliance costs: Instead of simplification, this leads to higher costs: you need to follow multiple interpretations depending on where you operate. For multinationals, this becomes a nightmare.

The balance between innovation and protection: is there a third way?

The debate about the Digital Omnibus is often framed as a zero-sum game: either we choose innovation and economic growth (more lenient rules), or we choose fundamental rights and protection (strict rules). This framing is too simple and possibly destructive.

There are examples of how harmonization and simplification can work without lowering protection:

Technical harmonization: Uniform definitions of "AI system," "high-risk," "personal data" across all laws would help enormously without changing the level of protection. If the AI Act, Data Act and GDPR mean exactly the same thing with the same term, compliance becomes simpler.

Procedural efficiency: One joint impact assessment instead of three separate ones (DPIA, FRIA, DSAIR) can reduce administrative burdens without lowering substantive protection. The only question is whether a combined assessment covers all aspects and remains verifiable.

Harmonized reporting: If incident reporting under NIS-2, AI Act and GDPR can go to one central point with shared formats, that saves enormously on overhead. The substantive obligation remains the same.
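As an illustration, a hypothetical shared schema (not an official EU format) shows the idea: one incident record, routed to each relevant supervisor, with the substantive reporting duties untouched.

```python
# Hypothetical shared schema (not an official EU format): one incident
# record routed to every relevant supervisor instead of three formats.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UnifiedIncidentReport:
    organisation: str
    occurred_at: datetime
    description: str
    personal_data_affected: bool = False    # GDPR Art. 33 dimension
    ai_system_involved: bool = False        # AI Act dimension
    essential_service_impact: bool = False  # NIS-2 dimension

    def route(self) -> list[str]:
        """Derive which supervisors receive this single report."""
        targets = []
        if self.personal_data_affected:
            targets.append("data protection authority")
        if self.ai_system_involved:
            targets.append("AI market surveillance authority")
        if self.essential_service_impact:
            targets.append("national CSIRT")
        return targets

report = UnifiedIncidentReport(
    organisation="Example BV",
    occurred_at=datetime(2025, 11, 20, 9, 30),
    description="Outage in an AI service exposed customer records",
    personal_data_affected=True,
    ai_system_involved=True,
)
print(report.route())
# -> ['data protection authority', 'AI market surveillance authority']
```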

Clearer guidance: Many compliance costs don't come from the law itself but from uncertainty about interpretation. Better, binding guidance from the EDPB and AI Office can help without weakening the law.

The Scandinavian lesson: Some Scandinavian countries have demonstrated that strict privacy legislation and thriving tech ecosystems can coexist. The key: clarity, predictability and pragmatic guidance. Companies can innovate when they know where the boundaries lie and that those boundaries remain stable. It's the uncertainty and inconsistency that does the most damage.

The question is whether the Commission chooses this third way, or whether political pressure leads to actual weakening of protection.

Conclusion: where this leaves us

The Digital Omnibus is more than a technical legislative package. It's a test of what Europe stands for in the digital era. Do we want to be a region where fundamental rights lead and the economy aligns with them? Or will we accept that economic pressure and Big Tech lobbying pull the level of protection downward?

The leaked drafts show this is not an academic debate. The proposed amendments touch the core of the GDPR and AI Act - the legislation that set Europe apart globally through its principled stance on privacy and responsible technology.

For lawyers, DPOs and AI governance leads, this means:

Stay alert to developments: The November 19 presentation is crucial, but a long political process follows, and it can still go in any direction.

Take an internal position: Don't wait until the law is final to determine what level of protection and ethics your organization pursues. You can have that conversation now.

Prepare scenarios: Model different outcomes and work out what each scenario means for your governance, contracts and practice.

Consider raising your voice: This is a democratic process. Organizations, citizens and professionals have the right and responsibility to contribute their perspective.

The coming months will be decisive for the future of digital rights in Europe. The question is not whether change is coming - it's coming anyway. The question is whether we as a society and as professionals actively help determine what that change looks like, or whether we watch while others set the course.

The Digital Omnibus is simplification if it goes well, and silent erosion if it goes wrong. It's up to all of us to ensure the former happens and not the latter.

