AI Act incident reporting: consultation open NOW

How the new rules for reporting serious AI incidents fundamentally change your incident response

Final days: The consultation on the draft guidance and reporting template for serious AI incidents is open until November 7, 2025. This is your opportunity to help shape how the EU-wide reporting chain takes form.

Why this consultation extends beyond mere reporting forms

On October 31, 2025, the European Commission published two crucial documents that make the practical operation of the AI Act tangible. The first is a draft guidance that explains when an incident is "serious" and what steps providers and deployers must take. The second is a standardized reporting template for notifications to national market surveillance authorities.

These reporting obligations, based on Article 73 of the AI Act, will not apply until August 2026, but they mark a fundamental shift in how organizations handle AI incidents. While cybersecurity incidents have been subject to reporting obligations for years under NIS2, and data breaches under the GDPR, the AI Act now introduces a specific regime for AI-related harm to health, safety, fundamental rights and critical infrastructure.

For organizations deploying AI in healthcare, education, mobility, employment or law enforcement, this is not merely an additional reporting obligation. It requires a fundamental reassessment of incident response, where not only technical failures but also unexpected model outputs, discriminatory decisions and indirect causal chains must be scrutinized.

What the AI Act precisely means by a "serious incident"

The AI Act defines a serious incident in Article 3(49) as an incident or malfunction that directly or indirectly leads to one of the following outcomes:

Four triggers for the reporting obligation

1. Health harm: death or serious injury to persons
2. Infrastructure: serious and irreversible disruption of critical infrastructure
3. Fundamental rights: breach of obligations under EU law protecting fundamental rights
4. Material damage: serious damage to property or the environment

The draft guidance clarifies that indirect causality can also fall under the reporting obligation. This is crucial because AI systems rarely cause direct harm, but often function as a link in a decision chain. Latham & Watkins points out that this means an error in diagnostic AI advice that only leads to harm through a subsequent clinical decision does fall under the reporting obligation.

Practical examples by sector

Healthcare and medical diagnostics

A triage tool that systematically underestimates patient risk, causing treatment to start too late, falls under the first trigger. A radiology AI that has lower sensitivity for certain demographic groups, and therefore misses abnormalities, can also lead to reportable health harm. The draft guidance emphasizes that providers must report as soon as the causal relationship can reasonably be assumed, not only after definitive proof.

Education and recruitment

An assessment model that systematically disadvantages certain groups in study placement decisions, or a recruitment algorithm that systematically rejects candidates with specific backgrounds, can constitute a breach of fundamental rights. Taylor Wessing notes that the AI Act explicitly mentions discriminatory outcomes as a possible trigger, even when there is no technical malfunction in the traditional sense.

Mobility and critical infrastructure

A computer vision system in traffic infrastructure that misclassifies objects and thereby causes an irreversible disruption, for example through incorrect signaling or a shutdown of traffic control systems, would fall under the second trigger. An important detail: the disruption must be serious and irreversible; not every temporary glitch qualifies.

Who must report and within what timeframes

The reporting obligation primarily lies with providers of high-risk AI systems. As soon as a provider knows, or should reasonably assume, that there is a serious incident with a causal relationship to their system, the clock starts. The draft guidance proposes three different deadlines, depending on severity:

Type of incident | Deadline | Initial report
Widespread breach or disruption of critical infrastructure | 2 days | Incomplete report allowed
Possible death | 10 days | Incomplete report allowed
Other serious incidents | 15 days | Incomplete report allowed

These deadlines are significantly shorter than what many organizations are accustomed to with, for example, annual safety reports. The draft guidance allows providers to submit an incomplete initial report first and supplement it later with the results of the internal investigation. After the report, a mandatory investigation follows and corrective measures must be considered.
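
To show how these deadlines might be wired into triage tooling, here is a minimal Python sketch; the trigger categories and helper names are our own, and the day counts simply mirror the draft table above, so they may change once the final guidance is adopted.

from datetime import datetime, timedelta
from enum import Enum


class Trigger(Enum):
    """The four Article 3(49) outcomes, used here as triage categories."""
    HEALTH_HARM = "death or serious injury to persons"
    CRITICAL_INFRASTRUCTURE = "serious and irreversible disruption of critical infrastructure"
    FUNDAMENTAL_RIGHTS = "breach of fundamental-rights obligations under EU law"
    MATERIAL_DAMAGE = "serious damage to property or the environment"


def reporting_deadline_days(trigger: Trigger, possible_death: bool = False,
                            widespread: bool = False) -> int:
    """Day counts mirror the draft table; subject to change in the final guidance."""
    if trigger is Trigger.CRITICAL_INFRASTRUCTURE or widespread:
        return 2    # widespread breach or disruption of critical infrastructure
    if possible_death:
        return 10   # possible death
    return 15       # other serious incidents


def report_due_by(awareness_date: datetime, trigger: Trigger,
                  possible_death: bool = False, widespread: bool = False) -> datetime:
    """The clock starts when the provider knows, or should reasonably assume,
    that there is a serious incident causally linked to the system."""
    return awareness_date + timedelta(
        days=reporting_deadline_days(trigger, possible_death, widespread))


# Example: a health-harm incident with a possible death, discovered on 1 September 2026.
print(report_due_by(datetime(2026, 9, 1), Trigger.HEALTH_HARM, possible_death=True))
# 2026-09-11 00:00:00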

Crucial warning: The draft guidance emphasizes that providers may not modify the system in a way that affects the subsequent investigation without informing the authority. This has direct implications for your patching and update procedures.

The role of deployers

Deployers who detect a serious incident must inform the provider without undue delay. The draft guidance clarifies that this is pragmatically read as within 24 hours. This aligns with existing incident response practices, but it establishes the obligation in an AI-specific context and creates a formal information duty toward the provider.

In practice, this means deployers must be able to detect when an AI system produces unexpected outcomes that may lead to harm, even if the system is technically functioning correctly, for example when it encounters unexpected edge cases.

Interplay with other reporting obligations: a complex puzzle

One of the most practical questions organizations have is how the AI Act reporting obligation relates to existing regimes such as GDPR data breach notifications (within 72 hours), NIS2 incident notifications, MDR/IVDR for medical devices, and DORA in the financial sector.

The Commission acknowledges in the draft guidance that double reporting burdens should be avoided. In sectors where equivalent reporting obligations already exist, the draft proposes that the AI Act reporting obligation can be limited to breaches of fundamental rights, with other consequences reported through the sector-specific regime. The consultation explicitly asks for practical examples to refine this further.

Practical interplay scenarios

Medical device with AI functionality

The MDR regime takes precedence for health and safety. An incident with a diagnostic AI system registered as a medical device is primarily reported via the MDR. But if the incident also leads to large-scale discriminatory impact (for example, systematic underestimation of risk for certain ethnic groups), you must also report through the AI Act channel because of the fundamental rights risks.

Data breach with AI component

A data breach caused by an AI system (for example, a misconfigured chatbot leaking personal data) must be reported to the data protection authority within 72 hours under the GDPR. If the same incident also leads to discrimination or other fundamental rights violations, an additional AI Act report may be necessary.

NIS2 critical entity

A NIS2-obliged entity reporting a cybersecurity incident involving an AI system must assess whether, in addition to the technical disruption, there is also AI-specific harm to fundamental rights or safety that justifies a separate AI Act report.

This prevents you from submitting the same report twice, but it does require that you map your internal reporting routes precisely and quickly assess, per incident, which regimes apply.
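
As a thought exercise only, such a first-pass assessment could be scripted. The mapping below is a simplification of the scenarios above, not legal advice, and the final interplay rules may change which route takes precedence.

def applicable_regimes(personal_data_breach: bool, medical_device: bool,
                       nis2_entity: bool, fundamental_rights_harm: bool,
                       health_or_safety_harm: bool) -> set[str]:
    """Simplified first-pass selection of reporting regimes for one incident.
    A triage aid only; every outcome still needs legal review."""
    regimes = set()
    if personal_data_breach:
        regimes.add("GDPR breach notification (72 hours)")
    if medical_device and health_or_safety_harm:
        regimes.add("MDR/IVDR vigilance reporting")
    if nis2_entity:
        regimes.add("NIS2 incident notification")
    if fundamental_rights_harm or (health_or_safety_harm and not medical_device):
        regimes.add("AI Act Article 73 serious-incident report")
    return regimes


# Example: a misconfigured chatbot that leaks personal data and produces
# discriminatory outputs triggers both the GDPR and the AI Act routes.
print(applicable_regimes(personal_data_breach=True, medical_device=False,
                         nis2_entity=False, fundamental_rights_harm=True,
                         health_or_safety_harm=False))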

The reporting template: what must be in the report

The reporting template now proposed is detailed and enforces traceability. The template asks for, among other things:

Administrative identification: Provider and deployer details, contact persons for follow-up, and identification of the competent market surveillance authority.

Technical system identification: EU database ID (once the database is operational), classification as high-risk system, version number and configuration, plus date of deployment.

Incident description and causality: Factual events in chronological order, when the incident was discovered and by whom, causal relationship between AI system and outcome, direct versus indirect causality, and number of affected persons and severity of impact.

Investigation results: Root cause analysis, which system component failed or performed unexpectedly, whether this was a technical malfunction or a design limitation, and whether there were earlier signals or near-misses.

Corrective measures: Acute mitigations already taken, planned structural adjustments, implementation timeline, and impact on other deployments of the same system.

The goal is to collect comparable data for supervision and identifying systemic risk trends across different organizations and sectors.
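
As a rough illustration, the template's field groups could be mirrored in an internal data model along these lines; the field names below are our own shorthand, not the official template labels, and the final template may structure them differently.

from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class IncidentReport:
    """Internal shorthand for the draft template's field groups (sketch only)."""
    # Administrative identification
    provider: str
    deployer: str
    contact_person: str
    competent_authority: str
    # Technical system identification
    eu_database_id: str | None      # once the EU database is operational
    high_risk: bool
    system_version: str
    deployment_date: str
    # Incident description and causality
    chronology: list[str] = field(default_factory=list)
    discovered_on: str = ""
    discovered_by: str = ""
    causality: str = ""             # direct versus indirect causal chain
    affected_persons: int = 0
    # Investigation results (may arrive in a later, supplemented report)
    root_cause: str | None = None
    earlier_signals: list[str] = field(default_factory=list)
    # Corrective measures
    acute_mitigations: list[str] = field(default_factory=list)
    planned_adjustments: list[str] = field(default_factory=list)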

Note: On November 4, 2025, the Commission also published a separate template for GPAI models with systemic risk. This falls under Article 55 and reports go to the AI Office instead of national supervisors. If you have both high-risk systems and GPAI models in your portfolio, you must align both reporting processes.

What this means for incident response in practice

Incident response becomes broader than cybersecurity. Under the AI Act, it also concerns model behavior, erroneous outcomes, and harm to fundamental rights. This requires a multidisciplinary playbook where legal, risk, data science, operations and communications collaborate.

New detection signals

Traditional security monitoring catches technical malfunctions and breaches. For AI incidents, you must also detect signals of model drift (performance deteriorates in production), fairness problems (systematic differences in outcomes between groups), unexpected failure modes (system fails on edge cases not in test set), and unwanted generalization (model extrapolates outside its training domain).

Without these signals, you see the incident too late and miss the deadlines. The draft guidance emphasizes rapid reporting followed by in-depth investigation, which means your detection mechanisms must be real-time or near-real-time for critical applications.
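
What such signals might look like in monitoring code is sketched below, assuming you already log a production accuracy baseline and per-group outcome rates; the thresholds are illustrative, not prescribed by the draft guidance.

def drift_alert(baseline_accuracy: float, production_accuracy: float,
                max_drop: float = 0.05) -> bool:
    """Flag model drift: production performance drops below the baseline
    by more than an illustrative tolerance."""
    return (baseline_accuracy - production_accuracy) > max_drop


def fairness_alert(positive_rate_by_group: dict[str, float],
                   max_ratio: float = 1.25) -> bool:
    """Flag systematic differences in outcomes between groups
    (here: the ratio between the highest and lowest positive rate)."""
    rates = list(positive_rate_by_group.values())
    return max(rates) / max(min(rates), 1e-9) > max_ratio


# Example: a recruitment model whose acceptance rates diverge between groups.
if fairness_alert({"group_a": 0.30, "group_b": 0.18}):
    print("Escalate to incident triage: possible fundamental-rights trigger")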

Evidence and reconstruction

The reporting deadlines are short. Without an audit trail and traceable logging, you cannot substantiate causality within the required timeframe. This means preserving model artifacts (which model version was running at the time of the incident), inference logs (which input led to which output), training data provenance (origin and characteristics of the training data), configuration history (feature flags, hyperparameters, thresholds), human oversight logs (when people intervened and why), and output samples (representative examples of system behavior before and during the incident).

The draft guidance also warns against making changes that hamper the investigation without reporting them. This means your change management process must be able to handle a "freeze" for forensic analysis while simultaneously implementing acute risk mitigation.
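
A minimal sketch of what such an automated freeze could do, assuming file-based artifacts; the paths and artifact names are hypothetical, and the exact retention mechanism is up to your own storage setup.

import shutil
from pathlib import Path


def freeze_evidence(incident_id: str, artifacts: dict[str, Path],
                    evidence_root: Path = Path("evidence")) -> tuple[Path, list[str]]:
    """Copy incident-relevant artifacts into a dedicated evidence folder
    before any patching or retraining, and report what is missing."""
    target = evidence_root / incident_id
    target.mkdir(parents=True, exist_ok=True)
    missing = []
    for name, source in artifacts.items():
        if source.exists():
            shutil.copy2(source, target / f"{name}{source.suffix}")
        else:
            missing.append(name)  # a gap in your evidence provision
    return target, missing


# Hypothetical artifact locations; adapt to your own storage layout.
frozen_to, gaps = freeze_evidence("INC-2026-001", {
    "model_artifact": Path("models/triage-v3.2.pkl"),
    "inference_log": Path("logs/inference-2026-09-01.jsonl"),
    "config_snapshot": Path("config/production.yaml"),
    "oversight_log": Path("logs/human-overrides.csv"),
})
print("Evidence gaps:", gaps)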

Three checks you can do today

1. Definitions and thresholds: when does something count as an AI incident?

Establish internally when something counts as a reportable AI incident. Use the four outcomes from the AI Act as a framework and document examples per domain. Explicitly include fundamental rights risks, even when there is no data breach or technical malfunction.

Practical exercise: Take your three most critical AI use cases and answer for each: which of the four triggers (health, infrastructure, fundamental rights, property) could apply? What is a realistic scenario where indirect causality plays a role? Who would detect this incident first (users, monitoring, external complaints)? Within what timeframe must you be able to report (2, 10 or 15 days)?

2. Evidence and logging: can you deliver facts within the deadline?

Test whether with current logs you can deliver sufficient facts for the reporting template within 2, 10 or 15 days. Look not only at IT logging but also at model and use-case logging.

Gap analysis: Can we establish within 24 hours which model version was active? Do we have inference logs that trace input-output pairs? Can we reconstruct whether human oversight was triggered? Is there logging of anomalous model behavior (drift detection)? Do we preserve representative output samples for baseline comparison?

Ensure you can make an initial report with the basic facts and supplement it later with investigation results, as the draft guidance allows.

3. Reporting route and interplay: who calls whom, when?

Map the reporting routes for each AI use case: which supervisor is competent for AI Act reports (likely the national market surveillance authority), which sectoral supervisor applies (e.g., the health inspectorate for healthcare, the financial supervisor for finance), and which privacy supervisor handles data breaches.

Reporting matrix template

Create a matrix that records, per use case:

  • Primary AI Act supervisor
  • Sectoral supervisor (if applicable)
  • GDPR supervisor for data breach notifications
  • NIS2/DORA supervisor for critical/financial entities
  • Which template per supervisor
  • Which deadlines apply
  • Who internally is responsible for which report

Include the interplay rules so you don't double-report where the draft guidance recognizes equivalence, and don't miss a report where additional notifications are needed.
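
One way to keep such a matrix machine-readable, so it can be versioned and consulted during triage, is a simple per-use-case mapping; the use case, authorities, templates, deadlines and owners below are placeholders to replace with your own assignment.

# Placeholder reporting matrix per AI use case (example values only).
REPORTING_MATRIX = {
    "patient-triage-tool": {
        "ai_act_supervisor": "national market surveillance authority",
        "sectoral_supervisor": "health inspectorate",
        "gdpr_supervisor": "data protection authority",
        "nis2_dora_supervisor": None,
        "templates": {"ai_act": "EU serious-incident template",
                      "sectoral": "MDR vigilance form"},
        "deadlines": {"ai_act_days": 15, "gdpr_hours": 72},
        "internal_owner": {"ai_act": "compliance lead", "gdpr": "privacy officer"},
    },
}


def reporting_routes(use_case: str) -> dict:
    """Look up all reporting routes for a use case during incident triage."""
    return REPORTING_MATRIX[use_case]


print(reporting_routes("patient-triage-tool")["ai_act_supervisor"])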

What does a workable playbook look like?

An effective AI incident playbook has the following components:

Trigger and triage: One point of entry where signals arrive (monitoring alerts, user complaints, internal escalations), with triage along three dimensions: safety and health (triggers 1, 2 and 4), fundamental rights (trigger 3), and operational impact. Triage determines which deadline applies and which supervisors must be informed.

Role-based action: Provider roles clearly assigned, with a mandate to decide on reports and backups for 24/7 availability. Deployers know how and within what timeframe (24 hours) they inform the provider. Legal, data science and operations have pre-coordinated responsibilities in the investigation.

First notice procedure: A short format that covers the minimum fields of the EU template, so you can report within the deadline with the basic facts and supplement later with full investigation results.

Investigation and preservation (forensics): Established retention periods for model artifacts, logs and configurations relevant to the incident, plus a freeze procedure that automatically secures relevant material once a potentially reportable incident is triggered.

Remediation and communication: A set of mitigating measures per incident type, plus a communication plan toward affected parties (users who may experience impact), supervisors (mandatory reports) and, in cases of widespread impact, possibly the public.

Lessons and updates: After completion, reassess use case risks based on lessons learned, update the FRIA and DPIA with new risk insights, and adjust training data or model choices if the incident revealed a structural problem.

How to respond effectively to the consultation

The Commission explicitly asks for practical examples and interplay case studies. This is your opportunity to make the final guidance workable for your sector and use cases.

Suggestions for your response

Clarify indirect causality: Ask for clear examples of when an indirect relationship is sufficient and how this relates to the burden of proof in the template. Provide a sector-specific example from your domain where the causal chain is complex.

Discuss deadline feasibility: Explain how you would practically meet the deadlines with a first-notice approach, and what data you can realistically deliver within 2, 10 or 15 days versus what requires a longer investigation.

Provide interplay examples: Describe scenarios where you would or would not also report under the GDPR, MDR, NIS2 or DORA, and where the bottlenecks are.

Sector-specific complexity: If your sector has specific challenges, describe them with a concrete example and propose pragmatic solutions.

The consultation closes Friday, November 7, 2025. Responses can be submitted via the European Commission's Have Your Say portal.

Why act now

The reporting obligations only apply from August 2026, but the impact on your processes, tooling and governance is immediate. Setting up adequate logging, monitoring and incident response procedures for AI systems takes months. Teams must be trained, playbooks tested, and tooling adapted. Starting in 2026 means improvising ad hoc when the first incidents occur.

Use the draft template to do a gap analysis of your current data provision and responsibilities. If you offer or integrate GPAI models with systemic risk, align the new GPAI reporting template with your high-risk process, so you have one coherent framework.

Three concrete next steps

1. Determine scope: Make an inventory of which of your AI use cases qualify as high-risk under Annex III of the AI Act. Determine for each use case who is legally the provider and who the deployer.

2. Simulate an incident and test the clock: Choose a realistic incident scenario for your most critical AI system. Walk through the playbook and measure whether you can report the required data within 2, 10 or 15 days. Identify gaps and make a plan to close them.

3. Submit a response to the consultation: Use your sector expertise to help the Commission make the guidance workable. One or two concrete cases are more valuable than abstract comments. The consultation closes Friday, November 7, 2025.

