LinkedIn AI controversy: why the Dutch DPA is raising alarm about your data

On September 24, 2025, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens - AP) raised serious concerns about LinkedIn's plans to use user data for training artificial intelligence systems. This isn't just a routine warning – the AP speaks of "major concerns" and actively urges users to adjust their settings. But what exactly is happening, and why is this situation so problematic from a privacy perspective?

The core of the controversy lies in LinkedIn's plan to automatically use all user data – including profile information dating back to 2003 – for AI training starting November 3, 2025, unless users explicitly object. This opt-out approach, combined with reliance on "legitimate interest" under GDPR, is sparking legal and ethical debate.

LinkedIn's AI plan: what's actually happening?

LinkedIn has announced that starting November 3, 2025, it will use user data to train AI models. This covers a broad range of information: profile data such as name, photo, current position, work experience, education, location, and skills. Public content like posts, articles, comments, and polls will also be used. Private messages remain excluded, according to LinkedIn.

What makes the situation particularly sensitive is the timeframe. LinkedIn wants to use data going back to 2003 – the year the platform was founded. That means more than two decades of professional information that users have shared will suddenly be deployed for a purpose it was never originally intended for.

The default setting is "on," meaning all LinkedIn users automatically participate unless they actively disable the setting. This opt-out approach forms a significant part of the criticism from regulators.

Why is the Dutch DPA raising alarm?

Monique Verdier, Vice-Chair of the Dutch DPA, articulated the concerns clearly: "LinkedIn wants to use data going back to 2003, while people shared that information back then without anticipating it would be used for AI training." This goes to the heart of informed consent – users originally agreed to share their professional information for networking and career purposes, not for feeding AI systems.

The DPA points to a fundamental loss of control once data enters AI models. Unlike traditional databases, it's practically impossible to remove specific information from trained models. This makes potential damage or misuse difficult to reverse.

Particularly concerning are the special categories of personal data that can be inferred from LinkedIn profiles. While LinkedIn claims not to use sensitive data, AI systems can derive sensitive characteristics about health, ethnicity, religion, or political preference from seemingly neutral information like work history, network, and posts.

The legal puzzle of "legitimate interest"

LinkedIn justifies the data processing under Article 6(1)(f) of GDPR – the so-called "legitimate interest." This legal basis requires a careful balancing test: LinkedIn's interest in AI development must be weighed against the privacy impact on users.

However, this justification is contested. Legal experts doubt whether LinkedIn can demonstrate that AI training is necessary for their business operations, and whether the interest outweighs the privacy rights of millions of users. The scale of processing – decades of data from all European users – makes the proportionality test particularly relevant.

The situation is complicated by jurisdictional issues. LinkedIn falls under the supervision of the Irish Data Protection Commission (DPC) because the company has its European headquarters in Dublin. Under GDPR's one-stop-shop mechanism, the Dutch DPA can warn and handle complaints, but formal enforcement lies with the DPC. This fragmentation of oversight represents a structural problem within GDPR enforcement.

The broader context: big tech and AI hunger

The LinkedIn situation doesn't stand alone. Meta previously announced similar plans for Facebook and Instagram data, leading to comparable objections from European regulators. This trend shows the growing "data hunger" of tech companies for training increasingly sophisticated AI systems.

The timing is significant. As the EU AI Act is being implemented in phases and foundation models fall under stricter regulation, companies are looking for ways to continue their AI development within legal frameworks. Using existing user data seems like a logical solution but clashes with privacy principles based on purpose limitation and transparency.

Practical consequences and protection

For LinkedIn users who object to AI training with their data, action is required before November 3, 2025. The setting can be adjusted via the privacy menu under "Data for improving generative AI features." This must be done for each account – there's no bulk option for business accounts.

However, the opt-out isn't foolproof. LinkedIn retains the right to change its terms, and it's unclear how long the opt-out remains valid. Moreover, it only protects against future use – data already incorporated into AI models cannot be removed.

For organizations using LinkedIn for professional purposes, new dilemmas arise. How do you balance the benefits of business networking against the privacy risks of AI training? Some companies are considering tightening their social media policies or limiting information employees share on professional platforms.

The future of consent in the AI era

The LinkedIn controversy illustrates a broader problem: how should consent work in a world where data is used for increasingly new purposes? Current GDPR principles of purpose limitation and transparency seem inadequate for the reality of AI development, where the application possibilities of data are often unknown at collection time.

This raises fundamental questions about the future of the platform economy and privacy. Should companies re-ask users for consent for every new application of their data? Or is a broader, more flexible form of consent needed that provides room for innovation without making users powerless?

The outcome of the LinkedIn case – whether the DPC ultimately accepts the "legitimate interest" justification – will set a precedent for other tech companies with similar plans. It is a test case for the balance between AI innovation and privacy protection in Europe.

For now, the Dutch DPA's message remains clear: users who want to maintain control over their data must act proactively. In the world of AI training, more than ever, silence doesn't necessarily mean consent – but it does mean losing control.

Need help with privacy and AI? 💡

Do you have questions about the implications of AI training for your organization? Or do you want to know how to protect yourself against unwanted use of your data? Contact us for a no-obligation consultation.