
NIS2 & AI Act

Cybersecurity and AI compliance as one challenge

Zahed Ashkara · Updated: June 2026 · ~12 min read

Why NIS2 and the AI Act must be considered together

AI systems are no longer a software layer on top of existing processes. In the energy sector they control distribution networks. In healthcare they support triage and diagnostics. In transportation they optimise traffic flows and planning processes. This makes AI systems critical digital infrastructure — and that is precisely what NIS2 regulates.

The Network and Information Security Directive 2 (Directive (EU) 2022/2555) imposes obligations on essential and important entities regarding cybersecurity risk management, incident notification and supply chain security. The AI Act imposes obligations on the same sectors regarding technical robustness, incident reporting and oversight of providers. The two frameworks contain no explicit cross-reference — yet the overlaps are substantial.

For organisations that are both NIS2 entities and deploy high-risk AI — and there are many — a split approach inevitably leads to duplicate work, gaps and increased compliance risk. The smartest approach is integrated: one risk assessment, two legal frameworks, and clear ownership structures that cover both sets of obligations.

What is NIS2?

NIS2 is the revised EU network and information security directive; the EU-wide transposition deadline was October 2024, and in the Netherlands it is implemented through the Cybersecurity Act (Cyberbeveiligingswet). The directive applies to essential entities (including energy, transport, banking, drinking water, health, digital infrastructure and space) and important entities (including postal services, waste management, chemicals, food, manufacturing, digital providers and research). Together these number tens of thousands of organisations across the EU.

NIS2 requires entities to maintain a documented cybersecurity risk management system, report incidents to the national authority (NCSC or sector CSIRT) within strict timelines, and actively manage supply chain security. Notably, NIS2 explicitly holds the management body accountable: members of the governing body can be personally liable if there is demonstrable failure to pay adequate attention to cybersecurity.

In the Netherlands, the National Cyber Security Centre (NCSC) supervises essential entities; sectoral regulators — including ACM, DNB and IGJ — supervise important entities in their respective sectors. Fines can reach €10 million or 2% of global annual turnover for essential entities, and €7 million or 1.4% for important entities.

The five intersections of NIS2 and the AI Act

Below we describe the five concrete points where NIS2 and the AI Act connect. For each intersection we explain what the obligation entails, which articles apply, and what this means for your compliance approach.

01

Cybersecurity risk management × AI system security

NIS2 Art. 21 × AI Act Art. 15

Article 21 of NIS2 requires essential and important entities to take appropriate and proportionate technical, operational and organisational measures to manage risks to network and information systems. This includes information security policy, risk analysis, business continuity and supply chain security. Crucially, NIS2 applies the principle of "security-by-design" — security must be built into systems, not added as an afterthought.

Article 15 of the AI Act imposes a similar requirement on high-risk AI systems: they must be resilient against attempts by unauthorised third parties to alter their use, outputs or performance. This includes protection against adversarial attacks, data poisoning and other AI-specific threats. In sectors such as healthcare, energy and transport — sectors that are typically NIS2 entities — this means that AI systems controlling critical processes must simultaneously meet both security standards. A joint architecture review and integrated security test saves considerable effort.

In practice: integrate AI-specific threat analysis (adversarial attacks, data poisoning, model inversion) into your NIS2 risk assessment. This allows you to conduct one security evaluation that covers both frameworks, rather than two separate tracks.
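As a minimal sketch of that integration, an existing NIS2 risk register can be extended with one entry per AI asset per AI-specific threat, tagged with both legal bases. The entry fields, threat list and asset name below are illustrative assumptions, not taken from any GRC tool or from the legal texts themselves:

```python
from dataclasses import dataclass

# Hypothetical risk-register entry; the field names are illustrative.
@dataclass
class RiskEntry:
    asset: str
    threat: str
    legal_basis: tuple  # frameworks under which the threat is assessed

# AI-specific threats named in the article, alongside classic NIS2 risks.
AI_THREATS = ("adversarial attack", "data poisoning", "model inversion")

def extend_register(register: list, ai_assets: list) -> list:
    """Add one entry per AI asset per AI-specific threat, tagged for both frameworks."""
    for asset in ai_assets:
        for threat in AI_THREATS:
            register.append(RiskEntry(asset, threat, ("NIS2 Art. 21", "AI Act Art. 15")))
    return register

register = extend_register([], ["triage-model"])
print(len(register))  # 3 — one entry per AI-specific threat
```

The point of the tagging is that a single assessment of, say, data poisoning then demonstrably counts for both frameworks in an audit.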

02

Incident reporting

NIS2 Art. 23 × AI Act Art. 73

Article 23 of NIS2 requires entities to report significant incidents to the competent authority within strict timelines: an early warning within 24 hours of identifying the incident as significant, a detailed incident notification within 72 hours, and a final report within one month. An incident is "significant" if it causes serious operational disruptions or financial losses, or significantly affects other entities or persons.

Article 73 of the AI Act (Article 62 in earlier drafts) requires providers of high-risk AI systems to report serious incidents to the national market surveillance authority. Deployers — the organisations using the system — must notify the provider. A "serious incident" under the AI Act includes incidents that pose a risk to the health, safety or fundamental rights of persons, or that cause material damage to property. In practice, one and the same event — an AI system making incorrect decisions in a hospital due to a cyberattack — can simultaneously qualify as both a NIS2 incident and an AI Act incident.

In practice: design a single incident response process that covers both reporting obligations. Establish in advance which types of AI-related incidents qualify as "significant" (NIS2) and "serious" (AI Act), who files the reports, and to which authorities.
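A single intake step can make the dual qualification explicit. The sketch below is a simplified illustration: the two booleans stand in for your organisation's own "significant"/"serious" assessments, and only the hard NIS2 deadlines from the article are computed — it is not a substitute for the legal tests themselves:

```python
from datetime import datetime, timedelta

def report_plan(detected_at: datetime, significant_nis2: bool, serious_ai_act: bool) -> dict:
    """Derive the reporting duties for one event under both frameworks."""
    plan = {}
    if significant_nis2:  # NIS2 Art. 23: deadlines run from identification as significant
        plan["nis2_early_warning_due"] = detected_at + timedelta(hours=24)
        plan["nis2_notification_due"] = detected_at + timedelta(hours=72)
        plan["nis2_final_report_due"] = "within one month"
    if serious_ai_act:  # AI Act: the deployer notifies the provider
        plan["ai_act_action"] = "notify provider of serious incident"
    return plan

# One event, two frameworks: a cyberattack causing faulty AI decisions in a hospital.
plan = report_plan(datetime(2026, 6, 1, 9, 0), significant_nis2=True, serious_ai_act=True)
print(sorted(plan))
```

Running both qualifications through one function mirrors the recommendation above: a single intake, with each duty routed to the right authority or party.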

03

Supply chain security

NIS2 Art. 21(2)(d) × AI Act provider/deployer chain

Article 21(2)(d) of NIS2 explicitly requires entities to address supply chain security, including the security aspects of relationships with direct suppliers and service providers. This means: due diligence on suppliers, contractual security requirements, and periodic assessment of their security level. For digital service providers — including cloud providers, managed service providers and AI providers — additional requirements apply.

The AI Act structures responsibilities through a provider/deployer chain. The provider (developer) is responsible for technical documentation, conformity assessments and registration. The deployer (the organisation using the system) is responsible for correct use, human oversight and reporting incidents to the provider. For a NIS2 entity that procures a high-risk AI system from an external provider, both the NIS2 supplier obligations (supply chain security) and the AI Act deployer obligations (governance, oversight, incident notification) apply. Both sets of obligations must be anchored in supplier contracts.

In practice: add both the NIS2 security requirements and the AI Act provider obligations as contractual clauses to your standard supplier contracts for AI systems. Conduct an annual combined supplier assessment that covers both frameworks.

Read more about AI procurement and contract obligations →
04

Governance & management body accountability

NIS2 Art. 20 × AI Act Art. 26

Article 20 of NIS2 is remarkable in the European regulatory landscape: it explicitly holds the management body (board of directors, executive board) responsible for compliance with cybersecurity obligations. Board members are required to undergo training so that they have sufficient knowledge of cybersecurity risks. In cases of demonstrable negligence, board members can be held personally liable. This represents a fundamental shift of cybersecurity from a technical responsibility to a board-level one.

Article 26 of the AI Act imposes comparable governance obligations on deployers: deployers must ensure that high-risk AI systems are used correctly, that there is human oversight, and that involved staff are sufficiently trained. Although the AI Act does not introduce explicit board member liability as NIS2 does, the expectation of regulators is clear: AI governance must be placed at board level. Organisations combining NIS2 and AI Act obligations effectively have no choice: the board must demonstrably be involved in both cybersecurity and AI governance.

In practice: treat NIS2 board accountability and AI Act governance obligations as one governance dossier. Establish an integrated AI and cybersecurity governance framework, report on it periodically to the board, and document this in board minutes.

Read more about Art. 26 deployer obligations →
05

Sector overlap: essential entities × high-risk AI

NIS2 sectors × AI Act Annex III high-risk categories

NIS2 defines essential entities in eleven sectors: energy, transport, banking, financial market infrastructure, health, drinking water, waste water, digital infrastructure, ICT service management, public administration and space. The AI Act defines high-risk AI systems in Annex III based on application and sector — and the overlap with NIS2 sectors is striking.

Hospitals using AI for patient triage are essential NIS2 entities and deployers of high-risk AI (Annex III, point 5: access to essential private and public services, including emergency triage). Energy companies using AI for network management are essential NIS2 entities and deployers of high-risk AI (Annex III, point 2: critical infrastructure). Municipalities using AI for benefit decisions are NIS2-bound public administrations and deployers of high-risk AI (Annex III, point 5(a): eligibility for public assistance benefits). The sector overlap is no coincidence: both frameworks target applications with significant societal impact.

In practice: conduct a combined scoping analysis: which systems are simultaneously NIS2 network and information systems and high-risk AI systems? Those systems require attention under both frameworks and deserve priority in your compliance roadmap.
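The combined scoping step is, at its core, an intersection of two inventories. A minimal sketch with a hypothetical system inventory — the system names and scope tags are made up for illustration, and in reality each tag is the outcome of a legal scoping assessment:

```python
# Hypothetical inventory: each system is tagged with the two scoping questions.
systems = [
    {"name": "grid-balancer", "nis2_system": True, "high_risk_ai": True},
    {"name": "office-firewall", "nis2_system": True, "high_risk_ai": False},
    {"name": "marketing-chatbot", "nis2_system": False, "high_risk_ai": False},
    {"name": "triage-assistant", "nis2_system": True, "high_risk_ai": True},
]

# Systems in scope of both frameworks get priority in the compliance roadmap.
priority = [s["name"] for s in systems if s["nis2_system"] and s["high_risk_ai"]]
print(priority)  # ['grid-balancer', 'triage-assistant']
```

Systems that carry only one tag still need attention, but the double-tagged ones are where an integrated assessment saves the most duplicated work.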

Which obligations do you need to cover twice?

Below is a practical checklist of measures that both NIS2 and the AI Act require. These are the points where an integrated approach delivers the most value.

Combined risk assessment for AI systems in critical processes

Art. 21 NIS2 (cybersecurity risk management) ⚖️ Art. 9 AI Act (risk management system for high-risk AI)

Security measures addressing AI-specific threats

Art. 21(2)(h) NIS2 (security of network and information systems) ⚖️ Art. 15 AI Act (accuracy, robustness and cybersecurity)

Integrated incident response procedure for AI-related incidents

Art. 23 NIS2 (incident notification within 24/72 hours) ⚖️ Art. 73 AI Act (notification of serious incidents for high-risk AI)

Contractual security and governance requirements for AI suppliers

Art. 21(2)(d) NIS2 (supply chain security) ⚖️ Art. 26 AI Act (deployer obligations, including provider oversight)

Board reporting on AI governance and cybersecurity risks

Art. 20 NIS2 (management body accountability) ⚖️ Art. 26 AI Act (human oversight and deployer governance)

Scope analysis: which systems are NIS2-bound and high-risk AI

Art. 2-3 NIS2 (scope of essential/important entities) ⚖️ Art. 6 + Annex III AI Act (classification of high-risk systems)

Technical documentation and audit logs for AI in critical infrastructure

Art. 21(2)(j) NIS2 (log files and monitoring) ⚖️ Art. 11 + 12 AI Act (technical documentation and logging)

Awareness training for board and operational staff

Art. 20(2) NIS2 (mandatory training for management body) ⚖️ Art. 26(6) AI Act (sufficient AI literacy for users)

How compliant are you on both fronts?

The AI Readiness Score tests your organisation specifically on the intersections between the AI Act and related legislation such as NIS2. You get a score per theme and concrete recommendations.