Current status March 2026: The interplay between GDPR and the EU AI Act is now a daily enforcement reality. The European Data Protection Board has published multiple opinions on how the two frameworks interact. Prohibited AI practices (including certain biometric and emotion recognition systems) have been enforceable since February 2025. GPAI providers have been subject to transparency and data governance obligations since August 2025. Deployers of high-risk AI face full compliance requirements from August 2026.
When a Dutch court ruled in early 2020 that the Dutch government's SyRI risk-profiling system violated fundamental rights because it processed personal data in ways that lacked sufficient transparency and proportionality, it crystallized a tension that organizations across Europe are now navigating: AI systems that are genuinely powerful tend to be data-intensive, complex and opaque, while the GDPR and the EU AI Act together demand transparency, minimization and human accountability. The two goals are not irreconcilable, but making them work together requires deliberate design choices, not afterthoughts.
The EU AI Act has been in force since August 1, 2024; the GDPR since May 2018. Many organizations have spent years building GDPR compliance programs, and the AI Act adds a new layer: it does not replace the GDPR but sits on top of it, creating complementary obligations that address different aspects of the same underlying challenge. Understanding the relationship between them is the starting point for any serious AI privacy strategy today.
Why AI creates privacy problems that the GDPR alone cannot fully address
The GDPR was designed for a world of databases and structured data processing. Its core principles (lawfulness, fairness and transparency, purpose limitation, data minimization, accuracy, storage limitation, and integrity and confidentiality) were crafted with a mental model of organizations that collect data for specific, identified purposes and process it in predictable ways.
AI systems break several of those assumptions. A machine learning model trained on millions of records does not process data in the sequential, rule-based way that GDPR compliance programs typically assume. The model "learns" statistical relationships from training data, many of which are not explicitly programmed and often cannot be fully explained even by the developers who built it. Once deployed, the model applies those learned patterns to new inputs in ways that can produce outcomes that are discriminatory, inaccurate or rights-infringing, even when the individual data fields feeding the model appear benign.
The proxy discrimination problem
A credit-scoring model trained on historical loan data may learn that people from certain postcodes, or with certain names, or who shop at certain retailers, are poorer credit risks. None of those variables is "race" or "ethnicity," but they correlate strongly with those protected characteristics. The GDPR's prohibition on processing special category data is technically not violated, but the discriminatory outcome is real. The AI Act addresses this through Article 10's data governance requirements, which require high-risk AI systems to use training data that is relevant, sufficiently representative and, to the best extent possible, free of errors, with explicit attention to possible biases.
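One practical probe during development, offered as a sketch rather than a prescribed method: test how well the candidate input features predict the protected attribute itself. If they predict it well above the majority-class baseline, the downstream model can learn that attribute indirectly. The column names (postcode, retailer_category, ethnicity) and the 10-point threshold below are hypothetical illustrations, assuming pandas and scikit-learn:

```python
# Proxy-discrimination probe: if ordinary input features can predict a
# protected attribute well above chance, a model trained on them can
# encode that attribute even though it is never an explicit input.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def proxy_risk_score(features: pd.DataFrame, protected: pd.Series) -> float:
    """Cross-validated accuracy of predicting the protected attribute
    from the candidate model inputs."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, pd.get_dummies(features), protected, cv=5)
    return float(scores.mean())

# Hypothetical usage on a credit-scoring training set:
# df = pd.read_parquet("training_data.parquet")
# risk = proxy_risk_score(df[["postcode", "retailer_category"]], df["ethnicity"])
# baseline = df["ethnicity"].value_counts(normalize=True).max()
# if risk - baseline > 0.10:  # illustrative threshold, not a legal standard
#     print(f"Proxy risk: attribute predictable at {risk:.0%} vs {baseline:.0%} baseline")
```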
A second structural tension is between the AI system's appetite for data and the GDPR's data minimization principle. Deep learning systems, in particular, often improve significantly with more data, and developers have strong incentives to collect broadly. The GDPR says you may only collect what is strictly necessary for the specified purpose. Navigating this tension requires genuine discipline: defining the purpose of the AI system precisely before training begins, rather than collecting broadly and deciding later what to do with the data.
The regulatory architecture: GDPR and AI Act side by side
The GDPR establishes the foundational requirements for personal data processing. It covers lawful bases (consent, contract, legitimate interests, legal obligation, vital interests, public task), data subject rights (access, rectification, erasure, portability, objection, restriction), controller and processor responsibilities, and the data protection impact assessment (DPIA) requirement for high-risk processing.
The AI Act adds a different layer. It classifies AI systems by risk level, from unacceptable (prohibited) through high-risk to limited and minimal risk, and imposes specific obligations on providers and deployers depending on that classification. For high-risk AI systems, which include those used in credit decisions, employment, education, healthcare, law enforcement, migration, and essential public services, Article 9 requires a risk management system, Article 10 governs data governance, Article 11 requires technical documentation, Article 13 mandates transparency toward users, Article 14 requires human oversight, and Article 15 covers robustness, accuracy and cybersecurity.
The relationship between a DPIA under Article 35 GDPR and a Fundamental Rights Impact Assessment (FRIA) under Article 27 AI Act is worth particular attention. Both are required for high-risk AI that processes personal data, and both cover overlapping ground. The DPIA focuses on privacy risks; the FRIA has a broader scope including non-discrimination, access to justice and social security. Running them in parallel, or as an integrated process, avoids duplicated work while ensuring both are complete. Neither can substitute for the other: the DPIA is required under GDPR regardless of AI Act obligations, and the FRIA is required under the AI Act for deployers that are public authorities or private organizations providing public services.
The specific challenge of automated decision-making
Article 22 of the GDPR gives individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects, unless specific conditions apply (explicit consent, contractual necessity, or explicit legal authorization). This right was designed for credit decisions, insurance pricing, and similar high-stakes automated processing, and it interacts directly with how AI is used in exactly those contexts.
The AI Act reinforces this requirement through Article 14's human oversight mandate: high-risk AI systems must be designed and deployed so that natural persons can effectively oversee their functioning, understand their outputs, and intervene when necessary. The phrase "effectively oversee" is meaningful, not a checkbox. A human who rubber-stamps AI outputs without genuinely understanding what they mean, or who lacks the authority to override the system, is not providing meaningful oversight in the sense Article 14 requires.
For organizations deploying AI in decision-making workflows, this means audit trail logging, explainability capabilities (at minimum, the ability to explain in plain language why a specific output was produced for a specific individual), and defined escalation paths when the AI output conflicts with human judgment. The review cannot be a formality.
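By way of illustration, a minimal audit record for one AI-assisted decision might capture the input, the output, the plain-language explanation shown to the reviewer, and whether the human overrode the system. The field names below are hypothetical; a real deployment would add access controls, retention rules and tamper-evident storage:

```python
# Append-only audit record for one AI-assisted decision, so that human
# oversight, DSARs and regulator queries can be answered later.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

def utc_now() -> str:
    return datetime.now(timezone.utc).isoformat()

@dataclass
class DecisionAuditRecord:
    subject_id: str       # pseudonymous identifier of the affected person
    model_version: str    # exact model build that produced the output
    model_input: dict     # features as the model saw them
    model_output: dict    # score or class, plus confidence
    explanation: str      # plain-language reason shown to the reviewer
    reviewer_id: str      # the human accountable for the decision
    human_override: bool  # True if the reviewer departed from the model
    timestamp: str

def log_decision(record: DecisionAuditRecord, path: str = "decisions.log") -> str:
    """Append one record as a JSON line; return an integrity checksum."""
    line = json.dumps(asdict(record), sort_keys=True)
    with open(path, "a") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode()).hexdigest()

# record = DecisionAuditRecord(..., timestamp=utc_now())
# checksum = log_decision(record)
```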
Key compliance obligations for 2025 and beyond
The first and most fundamental step is conducting a data protection impact assessment for any AI system that processes personal data in a way that is "likely to result in a high risk" to individuals. The GDPR lists several criteria, including large-scale processing, systematic monitoring, and processing of special category data, and most significant AI applications will meet at least one of them. The DPIA must be documented, reviewed, and updated when the system changes materially.
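To make that screening repeatable across projects, some teams encode the criteria as a checklist. A minimal sketch, assuming the nine criteria from the Article 29 Working Party DPIA guidelines and their "two or more" rule of thumb; the final judgment always rests with the DPO:

```python
# DPIA screening per the WP29 / EDPB criteria: processing that meets two
# or more is usually "likely high risk" and triggers a full DPIA.
DPIA_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_or_special_category_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_technology",
    "prevents_exercise_of_rights_or_services",
}

def dpia_required(flags: set[str]) -> bool:
    unknown = flags - DPIA_CRITERIA
    if unknown:
        raise ValueError(f"Unknown criteria: {unknown}")
    return len(flags) >= 2  # rule of thumb; escalate borderline cases to the DPO

# Example: a CV-screening model meets at least three criteria.
# dpia_required({"evaluation_or_scoring",
#                "automated_decision_with_legal_effect",
#                "innovative_technology"})  # -> True
```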
Alongside the DPIA, certain deployers of high-risk AI listed in Annex III of the AI Act, chiefly public bodies, private organizations providing public services, and deployers of credit-scoring and insurance-pricing systems, must complete a FRIA before deploying the system. A structured FRIA template or tool can help organize this process, but the assessment must be genuine: it should identify the specific fundamental rights at risk, assess the likelihood and severity of harm, and document the mitigation measures implemented. A superficial FRIA that identifies no risks where risks clearly exist will not withstand regulatory scrutiny.
Transparency obligations run through both frameworks. Under the GDPR, individuals must be informed when automated processing is used for decisions about them. Under Article 13 of the AI Act, high-risk AI systems must come with information that allows users to understand the system's capabilities and limitations, how it was developed, what data it was trained on, and how to interpret its outputs. Under Article 50, AI systems that interact directly with individuals, such as chatbots, must disclose that they are AI systems, not humans.
Data governance is the area where many organizations have the largest gap. The AI Act's Article 10 requirements, which apply to training, validation and testing data for high-risk systems, go beyond the GDPR's data quality principle. They require that training data be assessed for potential biases, that data collection practices be documented, and that the relevance of training data to the operational use case be justified. Organizations that have not historically documented why specific datasets were chosen for model training will need to build that discipline into their model development process.
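A lightweight way to build that discipline is to require a structured record for every dataset that feeds a high-risk model. The sketch below uses hypothetical fields loosely modeled on the Article 10 questions; it is an internal documentation aid, not an official template:

```python
# Per-dataset governance record capturing the Article 10-style questions:
# why this data, where it came from, and what bias checks were run.
from dataclasses import dataclass, field

@dataclass
class TrainingDatasetRecord:
    name: str
    source: str                   # provenance: vendor, internal system, public corpus
    collection_method: str        # how the data was gathered
    intended_use: str             # the operational use case it must be relevant to
    relevance_justification: str  # why this dataset fits that use case
    known_gaps: list[str] = field(default_factory=list)   # under-represented groups, periods
    bias_checks: list[str] = field(default_factory=list)  # tests run and their outcomes
    approved_by: str = ""         # accountable reviewer

# Hypothetical entry for a credit-scoring model:
# record = TrainingDatasetRecord(
#     name="loan_applications_2019_2023",
#     source="internal loan origination system",
#     collection_method="operational records, collected at application",
#     intended_use="consumer credit scoring (EU retail)",
#     relevance_justification="same product line and population as deployment",
#     known_gaps=["few applicants under 21"],
#     bias_checks=["approval-rate parity by age band", "proxy probe on postcode"],
#     approved_by="model-risk@yourorg.example",
# )
```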
Special categories, biometrics and sensitive attributes
The use of special category data (health, ethnicity, religion, political opinion, sexual orientation, biometric data) in AI systems is one of the most contested areas of AI privacy law. The GDPR's Article 9 prohibits processing such data unless one of a short list of exceptions applies. The AI Act adds specific restrictions: emotion recognition in workplaces and educational settings is prohibited outright under Article 5. Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is likewise prohibited, with very narrow exceptions. Systems that infer sensitive attributes from non-sensitive proxy data may violate the spirit of the GDPR even when they do not process special category data directly.
For healthcare AI, this means that any model using patient data, even de-identified data, requires a rigorous legal basis assessment, a DPIA, careful evaluation of re-identification risks (pseudonymized data can often be re-identified when combined with other datasets), and a privacy-by-design approach that minimizes data collection to what is demonstrably necessary for the clinical objective.
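One concrete screen for re-identification risk is k-anonymity over the quasi-identifiers: the size of the smallest group of records sharing the same combination of indirectly identifying fields. A minimal sketch assuming pandas and hypothetical column names; a low k means linkage attacks against an outside dataset are easy:

```python
# k-anonymity check: records whose quasi-identifier combination is rare
# are easy to re-identify by joining against an external dataset.
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str]) -> int:
    """Smallest equivalence-class size over the quasi-identifiers.
    k=1 means at least one record is unique and trivially linkable."""
    return int(df.groupby(quasi_identifiers).size().min())

# Hypothetical pseudonymized clinical extract:
# df = pd.read_csv("clinical_extract.csv")
# k = k_anonymity(df, ["postcode", "birth_year", "sex"])
# if k < 5:  # illustrative threshold; set your own in the DPIA
#     print(f"Re-identification risk: smallest group has only {k} record(s)")
```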
For HR AI, including resume screening, performance monitoring and workforce planning tools, the combination of GDPR and AI Act obligations means that employees and candidates must be informed of the AI's use, must have meaningful human review of adverse decisions, and must have a way to contest those decisions. Organizations that deploy automated resume screening without any of these safeguards are exposed on both GDPR and AI Act grounds.
Technical and organizational measures
Compliance is not a document exercise. It requires technical measures that are built into the AI system itself and organizational measures that govern how the system is used.
On the technical side, this means encryption of training data and model parameters, access controls that limit who can query the model and with what data, robust logging that records what inputs were provided and what outputs were produced (essential for audit and for responding to data subject access requests), and monitoring systems that detect when model performance degrades or produces anomalous outputs.
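For the monitoring piece, one common lightweight check is the Population Stability Index (PSI), which compares the live input distribution against the training distribution and flags drift before accuracy visibly degrades. A sketch in numpy; the 0.2 alert threshold is an industry heuristic rather than a regulatory value, and the file paths are hypothetical:

```python
# Population Stability Index (PSI) between the training distribution of a
# feature and its live distribution; a rising PSI signals input drift.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# train_values = np.load("training_feature.npy")   # hypothetical artifacts
# live_values = np.load("last_week_feature.npy")
# if psi(train_values, live_values) > 0.2:  # common heuristic threshold
#     print("Significant drift: trigger model review per the DPIA cycle")
```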
On the organizational side, it means assigning clear ownership for each AI system (who is responsible for monitoring, for responding to incidents, for updating the model when it drifts), establishing a process for data subject rights requests that explicitly covers AI-processed data (many organizations handle GDPR rights requests competently for their CRM data but have no process for requests involving AI-based decisions), and building regular review cycles where AI systems are evaluated against the criteria of their original DPIA and FRIA.
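A concrete anchor for that ownership model is a machine-readable registry entry per AI system. The schema below is a hypothetical sketch; what matters is that every responsibility has a named owner and every review has a due date:

```python
# One entry in an internal AI-system registry: who owns what, and when
# the next DPIA/FRIA review is due. All keys and values are illustrative.
AI_SYSTEM_REGISTRY_ENTRY = {
    "system": "resume-screening-v3",
    "risk_class": "high (AI Act Annex III, employment)",
    "business_owner": "head-of-recruiting@yourorg.example",
    "technical_owner": "ml-platform@yourorg.example",
    "incident_contact": "ai-incidents@yourorg.example",
    "dsar_process": "wiki/dsar-ai-decisions",  # explicitly covers AI-based decisions
    "last_dpia": "2025-11-04",
    "last_fria": "2025-11-04",
    "next_review_due": "2026-05-04",           # fixed review cycle, not ad hoc
}
```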
Security deserves specific attention. AI systems are not immune to adversarial attacks. A model can be manipulated through "poisoning" attacks that corrupt training data, through "evasion" attacks that cause the model to misclassify carefully crafted inputs, or through model extraction attacks that allow an attacker to reconstruct the model from queries. Article 15 of the AI Act and Article 32 of the GDPR both require appropriate technical measures to ensure security. For AI systems used in critical decisions, this includes adversarial testing as part of the validation process.
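As one illustration of what that validation step can look like, the sketch below perturbs held-out inputs with small random noise and measures how often decisions flip. It assumes a scikit-learn-style classifier with a predict method; random noise is a far weaker attacker than gradient-based methods, so a high flip rate is a red flag while a low one is no guarantee of robustness:

```python
# Naive evasion robustness check: small input perturbations should not
# flip decisions for a robust model.
import numpy as np

def flip_rate_under_noise(model, X: np.ndarray, epsilon: float = 0.05,
                          trials: int = 10, seed: int = 0) -> float:
    """Fraction of predictions that change under uniform noise in
    [-epsilon, epsilon], averaged over several random trials."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    flips = 0
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        flips += int(np.sum(model.predict(X + noise) != baseline))
    return flips / (trials * len(X))

# rate = flip_rate_under_noise(credit_model, X_validation)  # hypothetical model
# if rate > 0.01:  # illustrative tolerance
#     print(f"{rate:.1%} of decisions flip under tiny perturbations")
```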
Staying current in a fast-moving environment
The AI Act is a framework regulation, meaning it will be supplemented by delegated acts, implementing acts and standards that add technical detail. Annex III, the list of high-risk applications, can be extended by the European Commission through delegated acts when new categories of AI create comparable risks. Organizations need to monitor these developments, because a system that is not currently in a high-risk category may become subject to the full compliance regime without a change to the primary legislation.
The AI Office, established under Article 64, is publishing guidelines and technical specifications on an ongoing basis. The European Data Protection Board is also active, producing opinions on the interaction between GDPR and AI Act obligations. Monitoring these outputs is not optional for compliance teams: it is the difference between understanding the current state of the law and relying on interpretations that may have been superseded.
Building AI privacy compliance into your organization requires treating it as a continuous process rather than a project with a completion date. Systems change, data changes, the regulatory environment changes, and the ways in which AI can harm individuals are still being discovered. Organizations that embed GDPR and AI Act compliance into their AI development and procurement lifecycle, rather than bolting it on afterward, will navigate this complexity with significantly less friction than those that treat compliance as an audit event.
The right starting point is the same question the regulation asks: what could this system do to the people it affects, and are we prepared to justify that to them and to the regulator?