Practical preparation for AI regulatory sandboxes under the EU AI Act
Consultation Open: On December 2, 2025, the European Commission opened a consultation on a draft implementing act for AI regulatory sandboxes. Feedback can be submitted until January 13, 2026. Member states must have at least one national sandbox operational by August 2, 2026.
What the EU Is Trying to Achieve with Sandboxes
The AI Act explicitly links sandboxes to innovation in the development and pre-market phase, but with a clear boundary: testing must contribute to compliance with the AI Act and other relevant law. In the recitals, this is positioned as a controlled test environment that supports innovation while moving participants closer to compliance.
That "controlled" is not an empty word in the law. Article 57 describes sandboxes as a framework in which competent authorities not only supervise but also provide guidance and support, with specific attention to risks to fundamental rights, health, and safety.
Why This Implementing Act Matters
Many organizations doing pilots will recognize the pattern: a PoC starts quickly, data and process choices are made pragmatically, and only later does discussion arise about accountability documentation, role distribution, involved supervisors, and stop criteria. A sandbox is meant to reverse that: agreements about scope, safeguards, and evidence are made upfront, and you iterate during the test under an agreed regime.
Common Rules for All of the EU
The Commission explicitly announces that it will adopt an implementing act to establish common rules for the establishment and operation of sandboxes. This shifts the question from "will there be a sandbox?" to "what procedure and what minimum set of agreements will likely apply everywhere?"
What Article 57 Already Establishes, and What You Can Prepare Now
Even without the implementing act, you can base your preparation on Article 57 itself, because the core mechanisms are already there.
1. A Specific Sandbox Plan as Entry Ticket
The law assumes a specific plan and conditions for participation. That plan is not optional but the basis for guidance and supervision. In practical terms, this is a dossier that enables you to explain upfront: what you're testing, why, with what data, with what mitigations, and when you stop.
2. Output You Can Use Later
Article 57 mentions two deliverables that are often underestimated:
Written proof
Written evidence of successfully completed activities, to be provided upon request.
Exit report
Report with activities, results, and learning outcomes at the end of the sandbox.
The law states that these documents must be taken into account positively by market surveillance authorities and notified bodies, with the aim of accelerating conformity assessment procedures to a reasonable extent.
Strategic Advantage: The sandbox is not just a test environment but also a way to structure evidence that is useful later for conformity procedures.
3. Protection Against Administrative Fines, But Not Against Everything
If the (prospective) provider follows the sandbox plan and in good faith follows the authority's guidance, authorities may not impose administrative fines for infringements of the AI Act within that sandbox context.
Note: Liability for damage to third parties remains. Article 57 explicitly states that participation does not exempt you from liability under applicable liability law. Sandbox participation is not a "legal safety net" but a regime in which you get room to learn and improve under supervision.
4. Real Supervision, Including Stop Buttons
The law gives authorities the power to temporarily or permanently suspend testing or participation if effective mitigation is not possible, and to inform the AI Office about this. This means your test setup must explicitly show how you monitor risks and what interventions you can make if something goes wrong.
5. Involvement of Privacy Supervision Where Personal Data Is Involved
Article 57 links sandbox supervision to the involvement of other relevant authorities, including data protection authorities, when personal data is processed. If your sandbox case uses personal data, the privacy component should not be an afterthought annex but an integral part of the plan.
A Realistic Case: Debt Services and Early Warning
Suppose: an organization develops an AI system that helps municipalities recognize early signals of problematic debt based on multiple data sources. The goal is prevention: faster contact, less escalation, more customization.
Without a sandbox, friction quickly arises. The developer cannot test properly without context and data, while the municipality doesn't want an experiment that leads to uncontrollable bias, untraceable signals, or a workflow that blinds employees to nuance. Moreover, personal data and possible fundamental rights effects are directly at play.
A sandbox plan then forces choices that you would otherwise make late:
Limited scope
Which neighborhoods, which target group, which period?
Role distribution
Who is responsible for what in the chain?
Human decision point
Where does the employee decide, where does the system advise?
Measurement points
Error margins, fairness metrics, stop criteria.
In such a setup, the "sandbox" is not just a label but a set of agreements about controlled real-world testing, supervision, and demonstrability.
What You Can Set Up in 2025 to Be "Sandbox Ready" in 2026
The biggest gain often lies not in waiting for the national sandboxes to open but in preparing your own dossier and working method.
Work with One Central Description of the Test
A good sandbox plan is readable for lawyers, product teams, and supervisors. It contains at least:
- Objective and intended effects
- Scope and limitations
- Context of use
- Data deployment
- Model and system components
- Human role in the chain
- The way outputs are used
A plan consisting only of technical notes will cause friction in practice. Article 57 assumes a specific plan as the basis for guidance and supervision.
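To make this less abstract, here is a minimal sketch of how those plan elements could be captured as structured data alongside the prose plan. The field names and example values are purely illustrative; neither the AI Act nor the draft implementing act prescribes a format.

```python
from dataclasses import dataclass, field

@dataclass
class SandboxPlan:
    """Illustrative structure mirroring the plan elements listed above.
    Field names are our own; the AI Act does not prescribe a format."""
    objective: str                      # objective and intended effects
    scope: str                          # scope and limitations (who, where, when)
    context_of_use: str                 # organizational context the system runs in
    data_sources: list[str]             # data deployment: sources and categories
    system_components: list[str]        # model and system components under test
    human_role: str                     # where a person decides vs. where the system advises
    output_use: str                     # how outputs feed into the actual workflow
    mitigations: list[str] = field(default_factory=list)
    stop_criteria: list[str] = field(default_factory=list)

# Example values for the debt early-warning case sketched earlier (all hypothetical).
plan = SandboxPlan(
    objective="Earlier, more targeted contact with residents at risk of problematic debt",
    scope="Two neighborhoods, adults only, six-month test window",
    context_of_use="Municipal early-warning team; advisory signals only",
    data_sources=["rent arrears", "energy arrears", "municipal tax arrears"],
    system_components=["risk model v0.3", "case-worker dashboard"],
    human_role="Case worker decides on contact; the system only prioritizes",
    output_use="Weekly prioritized list reviewed in the team meeting",
    mitigations=["per-subgroup error analysis", "explanation shown per signal"],
    stop_criteria=["subgroup false-positive gap > 5 pp", "any untraceable signal"],
)
```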
Make Mitigations Measurable
Supervision and guidance have little use for good intentions alone. If you want to mitigate bias, describe your tests (for example per subgroup), acceptance criteria, and how you implement adjustments. If you want to improve explainability, describe what explanation you give to which user, and how you verify that the explanation is understood in practice.
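As a sketch of what "measurable" can look like, the snippet below checks a single per-subgroup fairness metric against an agreed acceptance criterion. The metric (false-positive rate gap), the 5-percentage-point threshold, and the data fields are illustrative assumptions, not requirements from the Act.

```python
from collections import defaultdict

# Illustrative acceptance criterion: the false-positive rate of the early-warning
# signal may not differ by more than 5 percentage points between subgroups.
MAX_FPR_GAP = 0.05

def false_positive_rate(records):
    """records: list of (predicted_flag, actually_in_arrears) boolean pairs."""
    negatives = [pred for pred, actual in records if not actual]
    if not negatives:
        return 0.0
    return sum(negatives) / len(negatives)

def check_subgroup_gap(outcomes):
    """outcomes: list of (subgroup, predicted_flag, actual) tuples."""
    per_group = defaultdict(list)
    for group, pred, actual in outcomes:
        per_group[group].append((pred, actual))
    rates = {g: false_positive_rate(recs) for g, recs in per_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= MAX_FPR_GAP

# Toy data: (subgroup, system flagged?, actually in arrears?)
outcomes = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]
rates, gap, within = check_subgroup_gap(outcomes)
print(rates, f"gap={gap:.2f}", "within criterion" if within else "escalate per stop criteria")
```

The point is not this particular metric but that the test, the threshold, and the consequence of failing it are written down before the sandbox starts.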
Establish the Stop Buttons Before You Start
Because authorities can suspend testing if mitigation proves ineffective, you want internal "stop logic" in place before you start:
- What signals trigger escalation?
- Who decides to pause?
- How do you roll back?
- How do you ensure the test environment doesn't keep running unnoticed?
This is also important for partners: a participating municipality, hospital, or school wants to know there's a brake that actually works.
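A minimal sketch of such internal stop logic, assuming a periodic monitoring job that evaluates agreed signals and escalates to a named decision owner. The rule names, thresholds, and roles are illustrative, taken from the hypothetical debt-signals case above.

```python
import logging
from dataclasses import dataclass
from typing import Callable, Dict, List

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("sandbox.stop_logic")

@dataclass
class StopRule:
    name: str
    triggered: Callable[[Dict[str, float]], bool]  # evaluates the latest monitoring metrics
    decision_owner: str                             # role that decides on pausing

# Illustrative rules; thresholds come from the agreed sandbox plan, not from the Act.
STOP_RULES: List[StopRule] = [
    StopRule("fairness_gap", lambda m: m.get("fpr_gap", 0.0) > 0.05, "project board"),
    StopRule("untraceable_signal", lambda m: m.get("untraced_signals", 0) > 0, "data protection officer"),
    StopRule("complaint_spike", lambda m: m.get("complaints_this_week", 0) >= 3, "municipal lead"),
]

def evaluate_stop_rules(metrics: Dict[str, float]) -> List[StopRule]:
    """Return the rules that fired and log who must be escalated to."""
    fired = [rule for rule in STOP_RULES if rule.triggered(metrics)]
    for rule in fired:
        logger.warning("Stop rule '%s' fired; escalate to %s", rule.name, rule.decision_owner)
    return fired

def pause_test():
    """The actual brake: disable the scoring job and the dashboard feed, notify partners."""
    logger.warning("Sandbox test paused")

# Example monitoring snapshot; in practice this comes from the agreed measurement points.
fired = evaluate_stop_rules({"fpr_gap": 0.08, "untraced_signals": 0, "complaints_this_week": 1})
if fired:
    pause_test()
```

In this toy version the pause follows automatically once a rule fires; in practice you would route the escalation to the decision owner first and keep the automated path only for the most severe signals.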
Integrate Privacy Supervision and DPIA Work into the Sandbox Plan
When personal data is processed, the privacy component should not run separately from sandbox governance. In practice, it helps to anchor data flows, retention periods, minimization choices, access management, and data subject rights handling in the plan itself.
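One way to keep that anchoring concrete is a small, version-controlled register of data flows kept next to the sandbox plan. The structure below is an illustrative sketch, not a prescribed DPIA or sandbox format; field names and values are assumptions for the debt-signals case above.

```python
import json

# Illustrative data-flow register kept next to the sandbox plan; fields and values
# are example assumptions, not a prescribed DPIA or sandbox format.
DATA_FLOWS = [
    {
        "name": "rent_arrears_feed",
        "source": "housing corporation export",
        "personal_data": ["address", "months in arrears"],
        "minimization": "amounts bucketed; no payment history retained",
        "retention_days": 180,
        "access": ["early-warning team", "sandbox auditors"],
        "rights_handling": "requests routed via the municipal privacy desk",
    },
]

REQUIRED_FIELDS = {"name", "source", "personal_data", "minimization",
                   "retention_days", "access", "rights_handling"}

def incomplete_flows(flows):
    """Return flow names that are missing one of the agreed privacy fields."""
    return [f.get("name", "<unnamed>") for f in flows
            if not REQUIRED_FIELDS.issubset(f)]

print(json.dumps(DATA_FLOWS, indent=2))
print("flows needing attention:", incomplete_flows(DATA_FLOWS))
```

Keeping the register machine-readable makes it easy to show a data protection authority, on request, which flows exist and which ones are still missing agreed safeguards.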
Design Your Documentation as if You'll Need to Deliver an Exit Report Later
The exit report is not a theoretical document. It must describe activities, results, and learning outcomes. If you already keep a consistent log of assumptions, tests, incidents, changes, and decisions as part of your daily work, the exit report becomes a summary instead of a reconstruction.
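A lightweight way to do this is an append-only log of structured entries that can later be filtered into the exit report sections. The entry kinds and fields below are illustrative assumptions, not a prescribed format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("sandbox_log.jsonl")  # append-only: one JSON object per line

def log_entry(kind: str, summary: str, **details):
    """kind is e.g. 'assumption', 'test', 'incident', 'change', or 'decision'."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "kind": kind,
        "summary": summary,
        **details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

def entries_of_kind(kind: str):
    """Filter the log by entry type, e.g. to draft the incidents or results section."""
    with LOG_FILE.open(encoding="utf-8") as f:
        return [e for e in map(json.loads, f) if e["kind"] == kind]

# Example: record a decision with a pointer to the test that motivated it.
log_entry("decision", "Lowered alert threshold after subgroup analysis",
          owner="project board", related_test="fairness_gap_week_40")
```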
Timeline and Next Steps
| Date | Milestone |
|---|---|
| December 2, 2025 | Consultation opened on draft implementing act |
| January 13, 2026 | Consultation feedback deadline |
| August 2, 2026 | National sandboxes must be operational |
Practical Roadmap: Choose one pilot suitable for controlled testing, build a plan covering technology, governance, and data, define measurable mitigations and stop criteria, and set up your documentation so you can deliver an exit report without stress. This way you're not dependent on the final details of the implementing act but align with the core of Article 57.
📚 Deepen Your Knowledge: Check out the Complete EU AI Act Guide for a full overview of all aspects of AI legislation, including more about AI regulatory sandboxes.