Digital autonomy in AI: what it means for orgs
Digital autonomy determines how organizations control AI decisions, manage vendor lock-in, and ensure human oversight remains meaningful in practice.
Category
Articles about AI governance, responsible AI use, and organizational implementation of AI regulations.
47 articles
Dutch DPA warns the proposed police surveillance law lacks clear limits, risking mass monitoring of innocent citizens without suspicion. Key findings.
The Dutch DPA publishes RAN 6: 4 of 9 indicators are now red. AI in recruitment, transparency, and AI Act preparation fall short.
The Pentagon demands that Anthropic remove two safety limits: no mass surveillance of Americans, no fully autonomous weapons. Dario Amodei refuses.
More than 100 international experts, led by Turing Award winner Yoshua Bengio, published the most comprehensive AI safety report to date.
80% of Fortune 500 firms use AI agents, but only 1 in 5 has mature governance. This 2026 guide covers what controls enterprises need to implement.
The GPAI Code of Practice Signatory Taskforce sets the rules for AI models like GPT and Gemini. What does this mean for organizations building or using AI?
Anthropic research shows AI assistants can disempower users in edge cases. Learn what triggers it, which roles are most at risk, and how to prevent it.
Over 70% of banks use agentic AI, but governance lags behind. This 2026 guide covers what financial institutions must implement before supervisors act.
The EU AI Act requires registration of high-risk AI systems. Learn who must register, what's required, and how it relates to the Dutch Algorithm Register.
EU AI Act enforcement starts August 2026. Full 2025 timeline, key deadlines, fines up to €35M, and what your organization must prepare now.
August 2026 deadline: Article 50 requires labeling of AI-generated content. Provider vs. deployer duties, watermarking options, and detection APIs explained.
The EU Commission's first draft Code of Practice on AI content transparency sets new labeling rules for AI-generated and AI-manipulated content.
AGI is not a well-defined concept but a spectrum of increasingly broad and autonomous AI systems.
AI alignment sounds like a technical topic for labs and researchers, but in practice it affects executives, regulators, and product teams daily.
The EU Commission's December 2025 consultation on AI regulatory sandboxes sets the framework. Here's what organizations need to prepare right now.
The Digital Omnibus aims to streamline Europe's fragmented digital rulebook. When a draft version leaked, organizations raised alarms.
The EU Commission's Digital Omnibus promises simplification but risks weakening GDPR and AI Act protections. What organizations need to track in 2025.
Seven lawsuits against OpenAI expose how ChatGPT allegedly encouraged vulnerable users toward suicide and reinforced delusional thinking.
The European Data Protection Supervisor published a revised version of its GenAI guidance on October 28, 2025.
CEN and CENELEC have taken exceptional measures to deliver core standards for the EU AI Act faster.
Until November 7, 2025, the European Commission is requesting feedback on the draft guidance and reporting template for reporting serious AI incidents.
The EU AI Act is getting a Scientific Panel of 60 independent experts who will lay the technical foundation for policy and supervision from 2026.
The Dutch DPA tested four AI chatbots as voting guides and found more than half gave incorrect recommendations. What this means for AI in public discourse.
The European Commission and EDPB published joint guidelines clarifying for the first time how the Digital Markets Act and GDPR intersect for major platforms.
Penalty provisions have been active since August 2025, but most organizations aren't ready. Here's where enforcement readiness stands and what gaps remain.
2025 marks a pivotal year in AI governance: from experimental frameworks to operational compliance.
From a "voluntary" GPAI Code to a practical vendor-assurance process with clear evidence, workable obligations, and a realistic path to EN standards.
The EU published a Code of Practice for general-purpose AI (GPAI). Its model clauses help deployers embed transparency, copyright, and safety in contracts.
The EU AI Act mandates a public summary of training content. The Commission's template shows what providers must disclose and which risks to avoid.
The AI Liability Directive was withdrawn in February 2025. What EU organizations must do under the PLD and AI Act in the resulting legal vacuum, and what replaces it.
DPIA or FRIA? One is required under the GDPR, the other under the EU AI Act. Learn the five key differences, when you need both, and download free templates for each.
The European Commission has officially recognized the voluntary code of practice for AI models as a legitimate compliance instrument under the AI Act.
From midnight confessions to ChatGPT to structural compliance risks: why Europe must act now to give AI conversations the same confidentiality privilege.
On July 10, 2025, the European Commission published the definitive General-Purpose AI Code of Practice.
The Dutch DPA's 2025 report: emotion recognition AI is built on disputed assumptions and poses serious EU AI Act risks. Key findings explained.
The EU Commission published the definitive Code of Practice for general-purpose AI. New obligations on transparency, safety, and copyright are now in effect.
Why can we often not understand AI? This blog dives into the 'black box' of AI and explains why explainability (XAI) is essential for trust and fairness.
EU AI Act sandboxes give organizations a supervised testing environment for high-risk AI. Learn who qualifies, how to apply, and what benefits await.
How do we as humans stay in control of AI? This blog discusses concrete ways to maintain control, with examples and tips based on the EU AI Act.
How do we stay in control of smart technology? This blog explores practical methods to maintain human control in AI use, with concrete examples.
A comparative analysis of how we underestimate AI, similar to how we once underestimated electricity, and why we must take action now to embrace it.
When an AI system makes a legal error, who is liable? From algorithmic biases to privacy dilemmas, we dive into the ethical gray areas of AI in legal practice.
From February 2, 2025, all employees working with AI must be sufficiently AI literate.
From boardroom to workplace: practical guidelines for implementing responsible AI systems within your organization.
An analysis of how the EU AI Act will influence different sectors and businesses.
Who is responsible for what in AI development and usage? Discover the roles and duties of all players in the AI chain.