Responsible AI Platform
🏭 Critical Infrastructure

The Story of GridSafe

How a grid operator discovered their AI could literally switch off the power

Fictional scenario, based on realistic situations

01

The Trigger

How it started

πŸ“§

GridSafe had used AI for grid management for years. The system predicted failures, optimized load balancing, and sometimes made autonomous decisions to prevent grid overload. It worked excellently, until the AI Act arrived.

Critical infrastructure is explicitly named as high-risk in the AI Act. AI systems that can disrupt essential services (energy, water, transport) fall under the strictest requirements. And GridPredict AI actually had the capability to shut down parts of the grid.

"If our AI makes a mistake, hundreds of thousands of households sit in the dark." The technical director only realized the implications when he read Annex III.
02

The Questions

What did they need to find out?

Question 1

Why is energy AI critical infrastructure under the AI Act?

The team went through the legal text. Annex III explicitly mentions AI systems intended for use as a "safety component" in the management of critical infrastructure, including electricity supply.

πŸ’‘ The insight

The rationale is clear: an error in energy AI can be society-disrupting. Blackouts affect hospitals, traffic lights, refrigeration. The AI Act recognizes that AI in this context is as critical as the physical infrastructure itself.

🌍 Why this matters

Europe has experienced multiple large-scale blackouts. The increasing complexity of the grid, with decentralized generation, EV charging, and dynamic pricing, makes AI essential but also risky. Regulators like ACM and RDI follow this closely.

Question 2

What controls do we need to build in?

GridPredict AI sometimes made autonomous decisions: it could shut down transformers, redistribute load, or activate emergency protocols. The question was: which of these actions should require human approval?

πŸ’‘ The insight

The team developed an impact matrix. Reversible actions with low impact could be automatic. Actions affecting more than 1000 households required human confirmation. Emergency shutdowns needed two-person approval, except in cases of immediate danger.
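An impact matrix like this can be made concrete as a simple decision rule. The sketch below is illustrative only: the class names, thresholds, and approval levels are assumptions for the sake of the example, not GridSafe's actual system.

```python
from dataclasses import dataclass
from enum import Enum

class ApprovalLevel(Enum):
    AUTOMATIC = "automatic"          # AI may act on its own
    SINGLE_OPERATOR = "operator"     # one human must confirm
    TWO_PERSON = "two_person"        # two humans must approve

@dataclass
class GridAction:
    households_affected: int
    reversible: bool
    emergency_shutdown: bool
    immediate_danger: bool = False

def required_approval(action: GridAction) -> ApprovalLevel:
    """Map an AI-proposed action onto the impact matrix."""
    if action.emergency_shutdown:
        # Two-person approval, except in immediate danger
        if action.immediate_danger:
            return ApprovalLevel.AUTOMATIC
        return ApprovalLevel.TWO_PERSON
    if action.households_affected > 1000:
        return ApprovalLevel.SINGLE_OPERATOR
    if action.reversible:
        return ApprovalLevel.AUTOMATIC
    # Irreversible but small-scale: still ask a human
    return ApprovalLevel.SINGLE_OPERATOR
```

The value of encoding the matrix this way is that the governance policy becomes testable: each cell of the matrix can be covered by a unit test.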

🌍 Why this matters

The AI Act asks for proportional human oversight. This doesn't mean AI can never act autonomously, but that controls are proportionate to risk. For critical infrastructure, that bar is high.

Question 3

How do we guarantee human oversight on real-time decisions?

The dilemma was timing. Some grid problems require response within milliseconds. No human can decide that fast. But the AI Act asks for human oversight. How do you solve that?

πŸ’‘ The insight

The solution was a combination of pre-approval and post-hoc review. Human operators set the parameters within which the AI could operate. Every autonomous decision was logged for daily review, and deviations triggered alerts.
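A minimal sketch of this pre-approval pattern, assuming a hypothetical operating envelope and decision log (all names and limits here are invented for illustration):

```python
import time
from dataclasses import dataclass, field

@dataclass
class OperatingEnvelope:
    """Parameters set in advance by human operators (the pre-approval)."""
    max_load_shift_mw: float
    max_households_affected: int

@dataclass
class DecisionLog:
    """Every autonomous decision is recorded for daily post-hoc review."""
    entries: list = field(default_factory=list)

    def record(self, action: str, within_envelope: bool) -> None:
        self.entries.append({
            "timestamp": time.time(),
            "action": action,
            "within_envelope": within_envelope,
            "flagged_for_review": not within_envelope,  # deviation -> alert
        })

def execute_autonomously(load_shift_mw: float, households: int,
                         envelope: OperatingEnvelope, log: DecisionLog) -> bool:
    """Act immediately if inside the envelope; otherwise flag and refuse."""
    within = (load_shift_mw <= envelope.max_load_shift_mw
              and households <= envelope.max_households_affected)
    log.record(f"shift {load_shift_mw} MW", within)
    return within
```

The point of the design is that the human decision happens once, up front, when the envelope is set; the millisecond-scale decisions inside it need no human in the loop, while everything outside it is blocked and flagged.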

🌍 Why this matters

This is a fundamental tension in AI governance: speed versus control. The AI Act recognizes this by focusing on "meaningful" oversight: not necessarily real-time, but effective. It's about the system, not every individual decision.

Question 4

What if the AI causes a cascade effect?

The nightmare scenario: the AI makes a decision that, through unforeseen interactions, causes a cascade of failures. A local shutdown becoming a regional blackout. How do you mitigate that?

πŸ’‘ The insight

The team implemented "blast radius" limits: every AI decision had a maximum impact it was allowed to cause. On top of that, an independent monitoring system evaluated AI decisions in real time and could override them if patterns indicated a cascade.
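The two safeguards can be sketched together as an independent guard that the AI's decisions must pass through. This is a hypothetical illustration (the class name, thresholds, and cascade heuristic are assumptions, not GridSafe's real design):

```python
from collections import deque

class BlastRadiusGuard:
    """Independent check: caps per-decision impact and watches for cascades."""

    def __init__(self, max_impact_mw: float, cascade_threshold: int,
                 window: int = 5):
        self.max_impact_mw = max_impact_mw
        self.cascade_threshold = cascade_threshold
        # Sliding window of recent decisions, used to spot cascade patterns
        self.recent_shutdowns = deque(maxlen=window)

    def allow(self, impact_mw: float, is_shutdown: bool) -> bool:
        if impact_mw > self.max_impact_mw:
            return False  # exceeds the blast radius limit outright
        self.recent_shutdowns.append(is_shutdown)
        if sum(self.recent_shutdowns) >= self.cascade_threshold:
            return False  # too many shutdowns in a row: possible cascade, override
        return True
```

Because the guard runs independently of the AI that proposes the actions, a single fault in the decision-making model cannot disable both the decision and its safety check.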

🌍 Why this matters

Cascade effects are a recognized risk in complex systems. The AI Act requires robust risk management, including scenario analysis of worst-case situations. For energy infrastructure, this means collaboration with regulators and grid operators across Europe.

03

The Journey

Step by step to compliance

Step 1 of 6
πŸ”

The scope analysis

An external audit identified three AI systems falling under Annex III. GridPredict AI was the most critical.

Step 2 of 6
πŸ“Š

Developing impact matrix

The team categorized all possible AI actions by impact and reversibility. This formed the basis for the governance framework.

Step 3 of 6
πŸ‘οΈ

Human oversight redesign

The control system was revised. Which decisions could the AI make, which required human approval?

Step 4 of 6
πŸ›‘οΈ

Blast radius implementation

Technical limits were built in to prevent any single AI decision from having too much impact.

Step 5 of 6
πŸ“ˆ

Monitoring system

An independent system was implemented that monitors AI decisions in real-time and detects anomalies.

Step 6 of 6
πŸ›οΈ

Regulator engagement

GridSafe proactively shared their approach with ACM and the Ministry of Economic Affairs. Transparency built trust.

04

The Obstacles

What went wrong?

Obstacle 1

βœ— Challenge

AI sometimes needed to react faster than humans can decide

↓

βœ“ Solution

Pre-approval parameters with post-hoc review and anomaly detection

Obstacle 2

βœ— Challenge

Risk of cascade effects from AI decisions

↓

βœ“ Solution

Blast radius limits and independent real-time monitoring

Obstacle 3

βœ— Challenge

No clear standards for AI in critical infrastructure

↓

βœ“ Solution

Proactive collaboration with regulators to develop best practices

"The AI Act forced us to think about scenarios we'd rather ignore. Our system is now safer, not despite regulation, but because of it."
Ir. Marcus Hendriks, CTO, GridSafe
05

The Lessons

What can we learn from this?

Lesson 1 / 4
⚑

Energy AI is critical infrastructure

AI that can disrupt essential services falls under the strictest AI Act requirements.

Lesson 2 / 4
⏱️

Speed and control can coexist

Pre-approval parameters combined with post-hoc review provide meaningful oversight without real-time bottlenecks.

Lesson 3 / 4
πŸ›‘οΈ

Limit the blast radius

Technical limits on AI impact prevent single decisions from having catastrophic consequences.

Lesson 4 / 4
🀝

Collaborate with regulators

Proactive engagement builds trust and helps shape standards.

Does your AI manage critical infrastructure?

Discover what the AI Act means for energy, water, transport and other essential services.