The European Commission missed a crucial deadline on 2 February. The consequences for organisations preparing for the AI Act are bigger than they appear at first glance.
On 2 February 2026, a deadline passed that received little media attention but is hugely relevant for thousands of organisations across Europe. The European Commission was supposed to publish guidelines on Article 6 of the AI Act, the article that determines when an AI system is classified as "high-risk." Those guidelines never materialised. The compliance deadlines, however, haven't moved a single day.
What does this mean in practice? Organisations must comply with extensive requirements for high-risk AI systems by 2 August 2026, but now lack the official guidance to determine exactly which systems fall under that category. It's like having to sit an exam for which the syllabus hasn't been published.
The missed deadline: Article 6 guidance
Article 6 of the AI Act is the linchpin of the entire regulation. Together with Annex III, it determines which AI systems are classified as high-risk and therefore subject to the most demanding compliance obligations: risk management, technical documentation, human oversight, and conformity assessments.
Under Article 96(1), the Commission was required to publish guidelines on the practical application of this article by 2 February 2026 at the latest. That didn't happen. According to Digital Watch Observatory, feedback is still being processed, with a revised draft expected later this month. Final adoption may slip to spring.
Note: The 2 February 2026 deadline for Article 6 guidance was a legal obligation of the Commission under Article 96(1) AI Act. Its non-publication changes nothing about the obligations for providers and deployers of high-risk AI systems.
The problem is twofold. First, organisations don't know precisely how to assess whether their AI system falls under the high-risk category, particularly regarding the "filter" in Article 6(3), which determines when a system listed in Annex III is nonetheless not considered high-risk. Second, notified bodies lack a framework on which to base their conformity assessments.
Digital Omnibus: a delay that may never arrive
Parallel to the guidance delay, there's a second source of uncertainty: the Digital Omnibus proposal presented by the Commission in November 2025. This proposal suggests postponing the obligations for high-risk AI systems under Annex III until 2 December 2027 at the latest, sixteen months later than the current deadline.
But there are important caveats. As Taylor Wessing analyses, this isn't an automatic delay. The Commission would first need to confirm that "adequate support measures" are available. Osborne Clarke points out that the delay would only apply to systems under Article 6(2) and Annex III, not to the broader AI Act obligations.
And crucially: the Digital Omnibus is a proposal, not law. It still needs to pass through the European Parliament and Council. Compliance Corylated reports, based on a Clifford Chance briefing, that the proposal will be "hard fought", and that the Omnibus itself must be approved before August 2026 to have any effect on the deadline.
Practical implication: Plan your compliance trajectory based on the current deadline of 2 August 2026. If the Omnibus leads to a delay, you've gained extra time. If it doesn't pass or comes too late, you're prepared.
SME exemptions: relief or fig leaf?
The Digital Omnibus also contains proposals for simplification benefiting small and medium-sized enterprises. Think simplified quality management systems and reduced documentation requirements. The Commission has allocated €950 million from the Digital Europe Programme to support 2,400 SMEs through regulatory sandboxes and conformity assessment assistance.
Sounds good. But the reality is that the core high-risk obligations (risk management, transparency, human oversight) remain intact for SMEs as well. The simplifications primarily address procedural burdens, not substantive requirements. Anyone offering a high-risk AI system must demonstrate it is safe and reliable, regardless of company size.
Signatory Taskforce: GPAI Code of Practice gains teeth
While the high-risk side of the AI Act faces delays, the GPAI side is accelerating. The EU AI Office has established a Signatory Taskforce under the General-Purpose AI Code of Practice. The Taskforce brings together companies that have signed the Code and serves as a forum for the practical interpretation of GPAI obligations.
As BABL AI reports, the transparency, safety, and accountability requirements for GPAI providers have been in effect since 2 August 2025, but enforcement begins in August 2026. The Taskforce aims to ensure consistent interpretation before that enforcement date.
Specifically: GPAI providers must publish a public summary of their training data by August 2026, based on a Commission template. This directly intersects with the copyright debate and the AI Act's transparency obligations.
2026: the year of enforcement
Let's be honest: 2025 was the year of preparation. The bans on unacceptable AI practices have been in effect since 2 February 2025, GPAI rules since August 2025. But real enforcement? That begins in 2026.
As PYMNTS reports, 2026 creates a "far more demanding global regulatory climate". Not just in the EU, but also in the US (Colorado AI Act from June 2026, California ADMT rules) and China. Organisations increasingly operate in a web of overlapping AI obligations.
In Europe, 72 national market surveillance authorities are becoming active, with an expected 1,600 complaints per year. Fines are substantial: up to €35 million or 7% of global annual turnover for the most serious violations.
Documentation gaps are violations in themselves. Under the AI Act, the absence of required technical documentation, logging, or conformity declarations constitutes an independent violation, regardless of whether the AI system otherwise functions correctly. Those who wait for guidance before starting documentation risk fines of up to €15 million or 3% of turnover.
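These ceilings work as "whichever is higher" caps: the fixed amount or the percentage of global annual turnover, whichever is greater (Article 99 AI Act). A minimal sketch of the arithmetic, with a deliberately hypothetical turnover figure:

```python
def max_fine(turnover_eur: int, fixed_cap_eur: int, pct: int) -> int:
    """Upper bound of an AI Act fine: the fixed cap or the given
    percentage of global annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_eur * pct // 100)

# Hypothetical company with €2 billion global annual turnover
turnover = 2_000_000_000

# Most serious violations: up to €35M or 7% of turnover
print(max_fine(turnover, 35_000_000, 7))   # 140000000 -- the 7% cap dominates

# Documentation violations: up to €15M or 3% of turnover
print(max_fine(turnover, 15_000_000, 3))   # 60000000

# For a smaller company (€100M turnover), the fixed cap dominates instead
print(max_fine(100_000_000, 35_000_000, 7))  # 35000000
```

The point: for any company above roughly €500 million turnover, the percentage cap, not the headline euro figure, sets the real exposure.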
Why you must start now, not later
It's tempting to wait. The guidance is delayed. The Omnibus may offer a postponement. Standards aren't finalised. But that very uncertainty is an argument for acting now, not later.
Regulativ.ai draws the comparison with GDPR: 85% of organisations fined in the first two years of GDPR claimed they "didn't have clear guidance." That turned out not to be a valid defence. The same dynamic threatens with the AI Act.
What can you do now, even without final guidance?
1. AI system inventory. Map out which AI systems your organisation develops, deploys, or uses. This is always step one, regardless of what guidance emerges.
2. Preliminary risk classification. The text of Article 6 and Annex III is available. You can make a preliminary assessment based on that, even without the guidelines. Flag borderline systems for reclassification once guidance appears.
3. Build documentation. Technical documentation, risk management systems, logging: these requirements are in the AI Act itself and don't depend on further guidance. Start building now.
4. Establish governance structures. Assign responsibilities, set up an AI governance team, define escalation procedures. This takes time and is organisational, not dependent on regulatory details.
5. Check your GPAI obligations. If you provide or integrate general-purpose AI models, obligations have applied since August 2025. Verify you comply with transparency requirements and training data summaries.
The bigger picture: a web of AI legislation
The EU AI Act doesn't exist in isolation. In 2026, organisations face a convergence of AI regulation worldwide:
- EU: Full AI Act enforcement from August 2026, Digital Omnibus under review, GPAI Code of Practice active
- US: Colorado AI Act (June 2026), California ADMT rules (preparing for 2027), aggressive enforcement by state attorneys general
- International: China's AI rules are tightening, the UK is working on its own AI Bill
As Wilson Sonsini notes, 2026 is the year AI regulation shifts from policy to enforcement. Organisations operating internationally must approach compliance holistically.
Conclusion: five action items for this month
The guidance delay is frustrating, but not an excuse for inaction. The AI Act deadlines stand. Enforcement is coming. The fines are real. Here are your five priorities for February 2026:
- Conduct an AI inventory if you haven't already: every AI system, every use case, every vendor
- Make a preliminary high-risk assessment based on Article 6 and Annex III. Don't wait for guidance
- Start technical documentation. This is the most time-consuming obligation and the most likely source of fines
- Monitor the Digital Omnibus, but plan against the existing August 2026 deadline
- Check your GPAI obligations. Transparency requirements already apply, enforcement begins in six months
The Commission may be late with its homework. You can't afford to be.