"Do we need to register this AI system?" is one of those questions that comes up in almost every AI governance discussion. In practice, "registration" can mean three different things. This blog brings the registration obligations under the EU AI Act back to basics.
A holistic overview of EU AI Act developments in 2025: phased implementation, political milestones, national implementation, business reactions, and an outlook on crucial deadlines in 2026.
The first draft of the Code of Practice on transparency in AI content has been published. This article shows how to implement Article 50 in processes, tooling and UI, with a clear separation between provider and deployer obligations.
The European Commission has published a first draft of a Code of Practice on transparency around AI-generated and AI-manipulated content. This article analyzes the requirements for providers and deployers, and offers practical guidance for implementation.
AGI is not a single well-defined concept but a spectrum of increasingly broad and autonomous AI systems. The EU AI Act doesn't regulate "AGI" as a label, but does cover such systems through its general-purpose AI rules and risk-based requirements. How do you implement AGI-like systems responsibly?
AI alignment sounds like a technical topic for labs and researchers, but in practice it affects executives, regulators and product teams daily: does an AI system truly do what we intend? Whether Europe adequately tackles this with legislation depends on which layer of alignment you mean.
The European Commission has opened a consultation on the setup and operation of AI regulatory sandboxes. This article analyzes Article 57 and provides practical guidance for sandbox preparation toward August 2026.
The Digital Omnibus aims to streamline Europe's fragmented digital rulebook. When a draft version leaked, organizations raised alarms. On November 19, the Commission published the official proposals. What survived and what was adjusted?
The European Commission is working on the Digital Omnibus to simplify digital legislation. But civil society organizations warn: this goes beyond technical adjustments. Analysis of leaked drafts reveals changes to the GDPR, AI Act and e-Privacy rules that could weaken fundamental rights protections.
Seven lawsuits against OpenAI expose how ChatGPT allegedly encouraged vulnerable users toward suicide and reinforced delusional thinking. Claims range from wrongful death to product liability, raising questions about the speed of GPT-4o's release and the absence of crisis detection.
The European Data Protection Supervisor published a revised version of its GenAI guidance on October 28, 2025. With a practical compliance checklist, clear role definitions, and concrete guidance from development to management, it becomes clearer what organizations must do when deploying generative AI.
CEN and CENELEC have taken exceptional measures to deliver core standards for the EU AI Act faster. With a clear timeline toward 2026, compliance is becoming increasingly concrete. Discover what this means and how you can prepare today.
Until November 7, 2025, the European Commission is requesting feedback on the draft guidance and reporting template for reporting serious AI incidents. For organizations deploying AI in safety-critical domains, this is THE moment to participate in shaping the harmonized reporting chain.
The EU AI Act is getting a Scientific Panel of 60 independent experts who will lay the technical foundation for policy and supervision from 2026. What does this scientific advisory layer mean for organizations working with GPAI and high-risk AI?
The Dutch Data Protection Authority tested four AI chatbots as voting guides and discovered an alarming pattern: more than half of all recommendations went to just two parties, regardless of the voter profile entered. What does this mean for organizations offering chat functionality?
The European Commission and EDPB published joint guidelines clarifying for the first time how the Digital Markets Act and GDPR intersect. For gatekeepers like Meta, Google, and Apple, this represents a fundamental shift in how they must obtain consent and combine data.
While the EU AI Act's penalty provisions have been in effect since August 2025, reality shows a fragmented picture of enforcement readiness across Member States. An analysis of compliance gaps, national authorities, and practical preparation steps.
2025 marks a pivotal year in AI governance: from experimental frameworks to operational compliance. An analysis of dominant trends, practical challenges, and strategic opportunities for organizations.
The EU has published a Code of Practice for general-purpose AI (GPAI) that helps providers demonstrably comply with transparency, copyright and safety requirements. Formally voluntary, but practically a translation of AI Act obligations. Learn how to anchor this contractually with concrete clause examples.
The European Commission has published an official template for model providers to publish a public-friendly summary of their training content. A practical guide to comply with transparency obligations for general-purpose AI models.
The European Commission withdrew the AI Liability Directive in February 2025 due to lack of consensus. This has significant implications for AI liability within the EU.
Need a DPIA or FRIA? Discover the 5 key differences between these impact assessments, plus a downloadable practical checklist and a complete comparison table for EU AI Act compliance.
The European Commission has officially recognized the voluntary Code of Practice for general-purpose AI models as a legitimate compliance instrument under the AI Act. While OpenAI, Google, xAI and Anthropic sign, Meta refuses participation and the US criticizes the European approach.
On July 10, 2025, the European Commission published the definitive General-Purpose AI Code of Practice. An analysis of which companies are signing, refusing, or still deliberating – and why.
The Dutch Data Protection Authority concludes in its semi-annual report that AI systems for emotion recognition are built on 'disputed assumptions' and pose risks for discrimination and privacy violations.
On July 10, 2025, the European Commission published the definitive Code of Practice for general-purpose AI models. A practical analysis of the new transparency, safety and copyright obligations.
Why can we often not understand AI? This blog dives into the 'black box' of AI and explains why explainability (XAI) is essential for trust, fairness, and control, especially in light of the EU AI Act.
From February 2, 2025, organizations must ensure that employees working with AI have a sufficient level of AI literacy. Learn how to prepare your organization in time for this EU AI Act requirement.