
DPIA for AI systems: when required and how to conduct one (2026 guide)

16 min read

A data protection officer at a health insurer receives a question from the innovation department: "We want to deploy an AI model that predicts claim behaviour. Do we need to do anything about that?" The answer is yes, and what you need to do is a DPIA. But not just any DPIA. One specifically designed around the risks that AI systems bring.

The Data Protection Impact Assessment (DPIA) is not a new instrument. Article 35 of the GDPR has required it since 2018. But the rise of AI systems processing personal data at ever-increasing scale makes the DPIA more relevant than ever. And with the EU AI Act, in force since August 2024 and with obligations phasing in through 2027, a new landscape emerges where the DPIA plays its own role alongside the FRIA (Fundamental Rights Impact Assessment).

In this article, we cover when a DPIA is required, how to conduct one for AI, what supervisory authorities expect, and how to combine the DPIA with AI Act obligations.

When Is a DPIA Required for AI?

Article 35(1) GDPR states that a DPIA is required when processing is "likely to result in a high risk to the rights and freedoms of natural persons." With AI systems, that is almost always the case, but let us be precise.

The three automatic triggers from the GDPR

Article 35(3) names three situations where a DPIA is always required:

a) Automated decision-making with legal effects. Think of an AI system that automatically determines whether someone gets a loan, can take out insurance, or qualifies for a benefit. This is the most common trigger for AI systems. As soon as the system makes or significantly influences decisions with legal or similarly significant effects on individuals, a DPIA is required.

b) Large-scale processing of special categories. When your AI system processes health data, biometric data, criminal records or other special category personal data at scale. An AI model analysing medical images or applying speech recognition falls directly under this.

c) Systematic and large-scale monitoring of publicly accessible areas. Camera systems with facial recognition, crowd analysis with AI, or smart sensors in public spaces. The combination of AI and surveillance is a classic DPIA trigger.

National supervisory authority lists

Under Article 35(4), national data protection authorities have published their own lists of processing operations requiring a DPIA. While specifics vary by country, common AI-relevant triggers include:

  • Covert observation of data subjects
  • Profiling based on personal data, especially combined with automated decision-making
  • Biometric data for identification purposes
  • Innovative use of existing or new technology (AI falls under this by definition)
  • Combining or matching datasets in ways data subjects cannot reasonably expect

In practice, most AI systems processing personal data meet at least two of these criteria. The general rule: two or more criteria? A DPIA is required.

When is a DPIA not needed?

There are situations where a DPIA for an AI system is not required:

  • The AI system processes no personal data (for example, an AI optimising manufacturing processes based on machine data)
  • The processing is on the authority's exemption list
  • A comparable DPIA has already been conducted for a similar processing operation and the risks are not materially different

But note: even when a DPIA is not formally required, it can be wise to conduct one anyway. Supervisory authorities view it as a sign of good data governance.

What Makes a DPIA for AI Different?

A DPIA for a traditional information system (a CRM, an HR database) is relatively straightforward. You know what data goes in, what happens to it, and what comes out. With AI systems, that is fundamentally different.

Model opacity

With many AI systems, particularly deep learning models, it is difficult or impossible to explain precisely how the model arrives at a particular output. This directly affects the transparency principle under the GDPR (Article 5(1)(a)) and the safeguards for automated decision-making (Article 22(3)), which include the right to obtain human intervention and to contest the decision. Your DPIA must describe how you handle this opacity.

Training data as a risk source

An AI model is only as good as the data it was trained on. Bias in training data leads to discriminatory outcomes. Your DPIA must assess the provenance, quality and representativeness of training data. Questions you must answer:

  • Is the training data representative of the population the model will be applied to?
  • Does the training data contain historical biases the model might reproduce?
  • Was the training data lawfully obtained and is there a valid legal basis for its use?
  • How do you handle personal data in the training set after training?
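
These questions can be operationalised. Below is a minimal sketch of a representativeness check that compares group shares in a training set against reference shares for the population the model will serve. The group labels, the data, and the 0.10 flag threshold are all illustrative assumptions, not values from any standard:

```python
from collections import Counter

def representativeness_gap(train_groups, population_shares):
    """Compare group shares in the training data against reference
    shares for the target population (hypothetical values).
    Returns the per-group absolute difference in proportion."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    return {
        group: abs(counts.get(group, 0) / total - expected)
        for group, expected in population_shares.items()
    }

# Illustrative: training records heavily skewed towards one age band
train = ["18-34"] * 70 + ["35-54"] * 20 + ["55+"] * 10
expected = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}

gaps = representativeness_gap(train, expected)
# A gap above e.g. 0.10 would be flagged and discussed in the DPIA
flagged = {g: round(d, 2) for g, d in gaps.items() if d > 0.10}
```

A check like this does not prove the absence of bias, but it makes the representativeness question answerable and auditable in the DPIA file.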

Model drift and continuous change

Unlike traditional systems, AI models can change over time. A model that is retrained (fine-tuning) or works with real-time data (online learning) can gradually deviate from its original behaviour. This means your DPIA is not a one-off document but a living instrument requiring periodic review.
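
Drift can be quantified. One common approach (a policy choice, not something the GDPR prescribes) is the Population Stability Index over binned model scores; the bin values and the 0.2 review threshold below are illustrative:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned score
    distributions (lists of proportions summing to 1).
    A common rule of thumb: PSI > 0.2 signals significant drift."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]   # score bins at deployment
current  = [0.10, 0.20, 0.30, 0.40]   # bins observed in production

drift = psi(baseline, current)
needs_review = drift > 0.2  # trigger point for a DPIA review
```

Wiring a check like this into monitoring turns "periodic review" from a calendar entry into an event-driven obligation.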

Emergent behaviour

Large language models and other generative AI systems can exhibit unexpected behaviour not directly attributable to training data or system configuration. Your DPIA must describe how you monitor for unforeseen behaviour and what measures you take when the model acts unexpectedly.

Conducting the DPIA Step by Step

The GDPR prescribes in Article 35(7) four minimum requirements for the content of a DPIA. Below we work through each step with specific considerations for AI systems.

Step 1: Systematically describe the processing

Start with a complete description of the AI system and how it processes personal data.

First, describe the AI system itself: what type of model is it (rule-based, machine learning, deep learning, generative), what is its function and which decisions does it support or make? Who is the provider and who is the deployer? What input does the system receive and what output does it deliver?

Then map out the data flows. Which personal data enters the system as direct input, training data, or context data? How is that data processed within the model? Where is it stored: on-premise, in the cloud, or with the provider? And is data shared with third parties such as API providers or cloud services?

Next, identify the data subjects: which categories of individuals are affected, how many individuals are potentially impacted, and whether vulnerable groups are involved such as minors, patients, or employees.

Finally, establish the legal basis. Which ground from Article 6 GDPR does the processing rely on? For special category data: which exception from Article 9(2) applies? And for automated decision-making: does an exception under Article 22(2) apply?

Step 2: Assess necessity and proportionality

This is the step many organisations rush through. You must demonstrate that using AI technology is necessary and proportionate to the purpose you want to achieve.

Start with purpose limitation: is the purpose of the AI processing specific, explicit and legitimate? Could the same purpose be achieved without AI or with less intrusive means? If a simple decision tree yields the same result, a complex neural network is hard to justify.

Then consider data minimisation. Does the AI system process only the personal data strictly necessary? Many AI models are trained on more data than needed, simply because that data is available. That is not a valid justification.

Assess storage limitation as well: how long is personal data retained and is training data still traceable to individuals after training? And finally accuracy: how do you ensure the AI system's output is accurate and what error rate is acceptable given the impact on data subjects?

Step 3: Identify and assess risks

This is where the DPIA becomes AI-specific. Beyond standard privacy risks, for AI systems you must examine five additional risk categories.

Discrimination and bias is the most discussed risk. Can the model systematically disadvantage certain groups based on protected characteristics? How do you test for bias before and after deployment, and which fairness metrics do you apply? An AI system that scores job applicants and structurally ranks women lower is not only unethical but also violates the GDPR and the AI Act.
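
As an illustration of a fairness metric, here is a minimal demographic-parity check. The group names and decisions are hypothetical, and which metric (and which acceptable gap) is appropriate depends entirely on the context:

```python
def selection_rates(outcomes):
    """Positive-outcome rate per group; outcomes maps group ->
    list of 0/1 decisions (1 = favourable, e.g. shortlisted)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def demographic_parity_gap(outcomes):
    """Difference between the highest and lowest selection rate.
    A gap near 0 suggests parity; the threshold is a policy choice."""
    rates = selection_rates(outcomes).values()
    return max(rates) - min(rates)

# Hypothetical screening decisions per group
decisions = {"group_a": [1, 1, 0, 1, 0], "group_b": [1, 0, 0, 0, 0]}
gap = demographic_parity_gap(decisions)  # 0.6 vs 0.2 selection rate
```

The point for the DPIA is not the specific metric but that the test is defined, run before and after deployment, and documented.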

With unlawful profiling, you investigate whether the system creates profiles of individuals based on their behaviour, location or other characteristics. Are those profiles accurate and used for purposes data subjects can reasonably expect?

Loss of autonomy concerns the extent to which the AI system determines what people see, which choices they can make, or how they are assessed. Is meaningful human intervention possible, or does the system in practice function as a black box dictating decisions?

Also assess security risks: how vulnerable is the model to adversarial attacks, data poisoning or model extraction? And what happens if the model is compromised?

Finally, transparency risks. Do data subjects know they are interacting with an AI system? And can they understand how it arrives at a decision? With complex models, full explainability is often not feasible, but you must describe what steps you take to be as transparent as possible.

Step 4: Describe the measures

For each identified risk, describe the measures you take to mitigate it.

For AI systems, the most common measures include: periodic bias audits to test for fairness and discrimination, explainability tools such as SHAP or LIME to explain model outcomes, and a human-in-the-loop setup where a human reviews high-impact decisions.
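
A human-in-the-loop setup can be as simple as a routing rule: clear cases are decided automatically, while borderline and adverse cases go to a human reviewer. The sketch below assumes a score-based model; the threshold and band width are hypothetical policy parameters:

```python
def route_decision(score, threshold=0.5, band=0.1):
    """Route a model score: confident approvals are automated,
    borderline and adverse cases go to a human reviewer, so no
    adverse decision is taken without human involvement."""
    if abs(score - threshold) < band:
        return "human_review"  # borderline: never decide automatically
    return "auto_approve" if score >= threshold else "human_review"

route_decision(0.9)    # well above threshold: automated approval
route_decision(0.55)   # inside the band: human review
route_decision(0.1)    # adverse outcome: human review
```

Routing every adverse outcome to a human, as sketched here, is one way to support the Article 22 safeguard of meaningful human intervention.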

Additionally, continuous monitoring is essential: you need to track model drift, performance degradation and unexpected behaviour. Establish solid data governance with quality controls on training data and documentation of data provenance. Restrict via access control who can call the model and which data it can access. And ensure you have an incident procedure describing what happens when the AI system generates erroneous or harmful output.

The Role of the DPO for AI Systems

The Data Protection Officer (DPO) has a statutory advisory role in the DPIA (Article 35(2) GDPR). For AI systems, this role is especially important.

The DPO must be involved at an early stage, not just when the system has already been procured or built. Consult the DPO during selection of an AI vendor (what data goes to the provider?), during design of data flows (which personal data is truly needed?), during the testing phase (are test results acceptable from a privacy perspective?), at the decision to put the system into production, and at every significant change to the model or data.

The DPO's advice and how it was followed up must be documented in the DPIA. Supervisory authorities check this.

Prior Consultation: When to Contact the Authority

A frequently forgotten obligation: Article 36 GDPR prescribes that you must consult the supervisory authority when the DPIA indicates that processing would result in a high risk and you cannot sufficiently mitigate that risk.

With AI systems, this occurs more often than with traditional systems. Model opacity, the potential for bias, and the scale of processing make it harder to reduce all risks to an acceptable level.

The procedure works as follows:

  1. You submit the DPIA to the supervisory authority, together with a description of measures already taken
  2. The authority has 8 weeks to respond (extendable by 6 weeks)
  3. The authority can require additional measures or prohibit the processing

In practice, we recommend informing the authority early when you intend to deploy an AI system for automated decision-making with significant impact on individuals.

DPIA and the EU AI Act: Dual Obligations

Since the EU AI Act came into force, organisations may face both a DPIA obligation (GDPR) and a FRIA obligation (AI Act Article 27). These are two different assessments with different focus areas.

The DPIA finds its legal basis in Article 35 GDPR and focuses on the protection of personal data. The supervisory authority is the national data protection authority, which can impose fines up to 4% of global turnover. Any data controller carrying out high-risk processing must conduct a DPIA.

The FRIA, by contrast, is based on Article 27 of the AI Act and looks beyond privacy alone: at all fundamental rights that may be affected by an AI system. Supervision will fall to the AI supervisory authority (still to be designated), with fines up to 15 million euros or 3% of global turnover, whichever is higher. The FRIA obligation applies only to specific deployers of high-risk AI systems.

For an in-depth comparison, see our DPIA vs FRIA guide.

How to Combine DPIA and FRIA

Article 27(4) of the AI Act explicitly allows the FRIA to be combined with the DPIA. In practice, this means:

  1. Start with the DPIA (it provides the deeper data protection analysis)
  2. Add FRIA elements as separate sections (non-discrimination, human dignity, access to justice, etc.)
  3. Document per right both the privacy impact and the broader fundamental rights impact
  4. Use a combined template covering both assessments

Conformity Assessment: The Third Layer

Beyond DPIA and FRIA, the AI Act also introduces the conformity assessment (Article 43) for providers of high-risk AI systems. Where the DPIA examines data processing risks and the FRIA examines fundamental rights risks, the conformity assessment focuses on technical and organisational requirements such as Annex IV documentation and the quality management system. As a provider of a high-risk AI system, you potentially need all three. As a deployer, typically the DPIA and FRIA.

Practical Examples

Example 1: AI chatbot for customer service

A telecom company wants to deploy an AI chatbot that can access customer data to answer questions.

DPIA trigger: Large-scale processing of personal data + innovative technology.

Specific risks:

  • The chatbot could accidentally show personal data of customer A to customer B (data leakage)
  • The model may have been trained on customer conversations without explicit consent
  • Sensitive information (payment arrears, complaint history) could unintentionally appear in responses

Measures: Strict access control per session, output filtering for personal data, no training on production data without pseudonymisation, clear notification that it is an AI system.
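
Output filtering, one of the measures above, can be sketched as a pattern-based redaction pass over the chatbot's reply before it reaches the user. The patterns below are illustrative and deliberately incomplete; production filters typically combine patterns with named-entity recognition:

```python
import re

# Hypothetical patterns; a real filter would cover many more
# identifier types (names, addresses, customer numbers, ...).
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),        # email addresses
    re.compile(r"\+?\d[\d \-]{7,}\d\b"),               # phone-like numbers
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),   # IBAN-like strings
]

def redact(text):
    """Replace PII-like substrings in a chatbot response before
    it is shown to the user."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

reply = "Registered under jan.doe@example.com, call +31 6 12345678."
safe = redact(reply)
```

Such a filter is a last line of defence, not a substitute for strict per-session access control on the underlying customer data.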

Example 2: HR screening with AI

A recruitment agency wants to use AI to automatically screen and rank CVs.

DPIA trigger: Automated decision-making with significant effects + profiling.

Specific risks:

  • Discrimination based on gender, age, ethnicity or postcode (proxy discrimination)
  • Candidates rejected without human review
  • Training data reflects historical hiring bias

Measures: Mandatory human review of all rejections, bias audit before deployment, no use of protected characteristics as features, transparency to candidates about AI use, regular fairness monitoring.

Example 3: Predictive analytics in healthcare

A health insurer wants to use AI to predict the risk of chronic conditions and offer preventive programmes.

DPIA trigger: Special category data (health) + automated decision-making + large-scale processing.

Specific risks:

  • Health data is the most sensitive category of personal data
  • Risk profiles could lead to exclusion from insurance coverage
  • Predictions can be stigmatising
  • Inaccurate predictions could lead to unnecessary medical interventions

Measures: Explicit consent or legal basis, strict pseudonymisation, no use for acceptance policy (prevention only), validation by medical professionals, opt-out possibility for insured persons.
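
The pseudonymisation measure can be illustrated with keyed hashing: the same identifier always maps to the same pseudonym, so records stay linkable for analysis, while the mapping cannot be reproduced without the separately stored key. The key and identifier below are placeholders:

```python
import hmac
import hashlib

def pseudonymise(identifier, secret_key):
    """Keyed hashing (HMAC-SHA256) turns a direct identifier into
    a stable pseudonym. The key must be stored separately from the
    data; without it the mapping cannot be recomputed."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"placeholder-key-kept-in-a-vault"
p1 = pseudonymise("patient-8472", key)
p2 = pseudonymise("patient-8472", key)
# Same input + same key -> same pseudonym, so records remain
# linkable for modelling without exposing the identifier.
```

Note that pseudonymised data remains personal data under the GDPR; this reduces risk but does not remove the processing from the DPIA's scope.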

DPIA as a Living Document

A common mistake is to treat the DPIA as a one-off approval document. For AI systems, that is a recipe for problems. The GDPR prescribes in Article 35(11) that you must review the DPIA when risks change.

For AI systems, specific moments requiring review include:

  • The model is retrained with new data
  • The input data changes significantly (different sources, different population)
  • The system is deployed for a new purpose or new target group
  • There have been incidents with the AI system
  • Laws and regulations change (such as the phased entry into force of the AI Act)
  • The technology changes fundamentally (upgrade to a different model type)

We recommend reviewing the DPIA at least annually, and more frequently when the AI system is actively being retrained.

Free DPIA Template for AI Systems

We have developed a specific DPIA template for AI systems covering all the elements discussed above. The template includes a systematic description of the AI system and data flows, an AI-specific risk assessment covering bias, transparency, security and emergent behaviour, a necessity and proportionality test adapted for AI, a measures overview with AI-specific mitigations, a section for DPO advice and documentation, and a review schedule for continuous compliance.

Download the template for free via our templates page.

Conclusion

The DPIA for AI systems is not a formality. It is the instrument through which you demonstrate that you take people's privacy seriously in an era where AI systems increasingly intervene in daily life. With the EU AI Act now in force, the DPIA also becomes part of a broader compliance landscape that includes the FRIA and conformity assessment.

Start early, involve your DPO, be honest about the risks, and treat the DPIA as a living document. That is not only what the law prescribes, it is also what protects your organisation.


More on the relationship between DPIA and FRIA? Read our complete DPIA vs FRIA comparison. Want to get started with the fundamental rights assessment? Use our free FRIA Generator.
