Timeline that matters: Since February 2025, organizations have been required to ensure AI literacy under Article 4 of the AI Regulation. Supervision by national authorities starts in August 2026. Those who build smartly now avoid remedial work later and can innovate safely in the meantime. The Dutch DPA provides a concrete blueprint with its new guidance "Verder bouwen aan AI-geletterdheid" (available in Dutch).
What's exactly new?
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) published follow-up guidance on AI literacy this week. This is not a standalone campaign or a voluntary recommendation, but a deepening of the earlier guidance "Getting started with AI literacy" ("Aan de slag met AI-geletterdheid") that translates the legal obligation into a multi-year, iterative action plan: identify, set goals, implement, and evaluate.
The document is packed with insights from the call for input and DPA meetings, which show that many organizations have taken the first steps but struggle with embedding, steering, and measuring. The central idea: AI literacy is not a one-time training, but an organizational capability that you make visible, manage, and periodically improve.
What does the law require exactly?
Article 4 of the AI Regulation requires providers and deployers of AI systems to ensure an adequate level of AI literacy among their staff and other persons working with AI systems on their behalf. The European Commission clarifies that this is risk-based: the organization's role (provider or deployer), the context of deployment, and the risks of the systems involved determine the required depth.
Note two time dimensions:
- February 2025: the obligation has been in effect since this date
- August 2026: national supervisory authorities start enforcement
Evidence consists of internal documentation and does not require certification. The European Commission clarifies this in their Q&A: "Providers and deployers are not required to obtain a certificate for AI literacy. It is sufficient to maintain internal documentation demonstrating the measures taken."
The core of AI literacy, according to the DPA
The DPA describes AI literacy as an ongoing effort that takes context and roles into account. It goes beyond "knowing what a model is." Employees and other stakeholders must:
- Be able to recognize risks
- Understand the impact on people
- Know how to work responsibly with AI within their own processes
The obligation explicitly extends to persons deploying AI on behalf of the organization, such as suppliers or service providers. All of this requires a structural approach, not isolated workshops.
The multi-year action plan in four steps
1) Identify: get a sharp picture of what you have and who works with it
Map your AI systems, including purpose, degree of autonomy, and potential consequences for fundamental rights, safety, and health. Link the relevant roles directly: who uses, manages, or develops AI, and who makes decisions based on its output?
Also document the current level of knowledge and skills. Without this baseline, any program remains generic and impossible to demonstrate.
Practical example: A legal department using AI for contract review creates an overview of the following (a minimal register sketch follows the list):
- Which AI tools are used (e.g., document analysis tools, generative AI for research)
- What the purpose is (accelerate due diligence, search case law)
- Who works with it (junior lawyers, senior partners, paralegals)
- What data goes in (contracts, confidential documents)
- What the main risks are (hallucinating output, confidentiality, source reliability)
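Such an overview stays demonstrable if you record each system in a small, structured register. Below is a minimal sketch in Python; the field names and example values are illustrative assumptions, not something the DPA guidance prescribes:

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in an internal AI register; fields mirror the overview above (illustrative)."""
    name: str                   # the tool or system
    purpose: str                # why it is used
    autonomy_level: str         # e.g. "suggests only" vs. "decides automatically"
    roles: list[str]            # who works with it
    data_categories: list[str]  # what data goes in
    main_risks: list[str]       # e.g. hallucinated output, confidentiality
    risk_level: str             # your own classification, e.g. "high" / "limited"

# Illustrative entry for the legal department example above
contract_review = AISystemEntry(
    name="Generative AI for contract review",
    purpose="Accelerate due diligence and case-law research",
    autonomy_level="Suggests only; a lawyer decides",
    roles=["junior lawyers", "senior partners", "paralegals"],
    data_categories=["contracts", "confidential client documents"],
    main_risks=["hallucinated output", "confidentiality", "source reliability"],
    risk_level="high",
)
```

Whether you keep this in a spreadsheet, a database, or code matters less than recording the same fields consistently for every system.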
2) Set goals: prioritize based on risk and role
Establish concrete, measurable goals based on risk level and job profiles. A team managing a high-risk application needs a different depth of training than a marketing team experimenting with generative tools.
Think multidisciplinary: technology, people, law, and organizational culture. Assign responsibilities and make explicit who is accountable for what.
Risk-based goals in practice
IT administrators of AI models: In-depth knowledge of monitoring, interpretation of results, escalation paths, and security hygiene. Goal: "All administrators complete a module on model behavior and impact on end users within the second quarter of 2025."
Marketing team with generative AI: Awareness of model limitations, source verification, and transparency. Goal: "All marketing staff know how to validate AI-generated content and when human review is mandatory."
HR in recruitment with AI tools: GDPR compliance, bias awareness, transparency to candidates. Goal: "HR team completes DPIA training and can explain when AI use must be disclosed to candidates."
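To keep such goals measurable and owned, it helps to record each one with an owner, a deadline, and a completion criterion. A minimal sketch, assuming a simple in-house structure (field names and example entries are illustrative, not prescribed by the guidance):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LiteracyGoal:
    """A measurable AI-literacy goal tied to a role and risk level (illustrative)."""
    role: str          # e.g. "IT administrators of AI models"
    risk_level: str    # drives the required depth
    description: str   # what must be achieved
    owner: str         # who is accountable
    deadline: date
    metric: str        # how completion is measured

goals = [
    LiteracyGoal(
        role="IT administrators of AI models",
        risk_level="high",
        description="Complete a module on model behavior and impact on end users",
        owner="Head of IT operations",
        deadline=date(2025, 6, 30),
        metric="100% of administrators completed the module",
    ),
    LiteracyGoal(
        role="Marketing team using generative AI",
        risk_level="limited",
        description="Validate AI-generated content and know when human review is mandatory",
        owner="Marketing lead",
        deadline=date(2025, 9, 30),
        metric="All marketing staff passed a short validation check",
    ),
]
```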
3) Implement: from PowerPoint to behavior
Embed AI literacy in governance rather than steering it only bottom-up. Executives must put the topic on the agenda, allocate budget, and have sufficient knowledge themselves to provide direction.
Combine training and awareness with:
- Transparency about where and how AI is used
- Culture/vision document ("How do we approach AI?")
- Internal documentation of your approach and progress
The DPA emphasizes this is not just about knowledge transfer, but about behavioral change and awareness that becomes visible in daily work practices.
4) Evaluate: measure, learn, adjust
Set up monitoring to see if goals are met, analyze residual risk, and include AI literacy in management reports. As AI use grows, your program's maturity must evolve with it.
Evaluation is not a final test, but a recurring routine.
Who does what? A practical role division
A working program stands or falls on ownership. The DPA advises embedding AI literacy at board level and appointing a clearly designated owner. In practice, the following division often works:
| Role | Responsibility |
|---|---|
| Executive board | Sets direction, safeguards resources, puts AI literacy on the board agenda |
| AI governance group | Translates direction into roles and rituals (legal, security, privacy, data, HR/L&D, business) |
| Team leads | Make it concrete in processes and on-the-job learning |
| HR/L&D | Keeps learning paths current, measures participation and effect |
This prevents knowledge from remaining siloed in a project team and links it to decision-making and risk management.
Examples that work in practice
Legal department
Start with an overview of AI touchpoints: contract review with generative AI, due diligence, research. Document for each process which AI is used, what the purpose is, and what the main risks are.
Concrete goals:
- All lawyers complete a module on reliable source verification and model limitations
- Bi-weekly sessions to capture lessons learned
- Decision memos state whether AI was used and how it was validated
This makes choices explainable and the approach auditable.
IT and data
For administrators of models or integrations, topics like monitoring, interpretation of results, escalation paths, and security hygiene belong in the curriculum. Trainers explain the link between model behavior and impact on end users.
Governance here requires clear role delineation: who assesses changes in model versions, who can intervene, who documents?
Education and service organizations
Teams deploying generative chatbots or learning platforms need a blend of pedagogy, bias awareness, and transparency to students or clients:
- When are you talking to AI?
- What limitations apply?
- How do you report errors?
In their input to the DPA, organizations indicate that they want more guidance while simultaneously fearing a loss of control. A cross-functional AI literacy working group helps share experiences and capture recurring patterns.
Measuring without dashboard overload
The Commission indicates you don't need a certificate; internal documentation suffices. Think of:
- Register of AI systems with risk profile
- Role-based learning paths with clear goals per function
- Attendance and assessment records (without bureaucracy)
- Management updates per quarter
Keep it small and meaningful: it is better to measure whether behavior changes (e.g., the number of peer reviews that mention AI use) than to track participation percentages alone. Show that you periodically adjust goals based on incidents, audits, and feedback.
Practical measurable indicators (a minimal calculation sketch follows this list):
- Percentage of employees who completed basic AI awareness training
- Number of AI systems in register with complete risk profile
- Percentage of decision memos where AI use is documented
- Number of reported AI-related incidents or near-misses
- Results of periodic knowledge assessments per risk group
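As a sketch of how a couple of these indicators could be computed from simple participation records, assuming a basic in-house data format (nothing here is prescribed by the DPA or the Commission):

```python
# Illustrative records: who completed basic awareness training,
# and which registered systems have a complete risk profile.
training_log = {
    "alice": True,
    "bob": False,
    "carol": True,
}

ai_register = [
    {"name": "contract review assistant", "risk_profile_complete": True},
    {"name": "recruitment screening tool", "risk_profile_complete": False},
]

def pct(numerator: int, denominator: int) -> float:
    """Percentage rounded to one decimal; 0.0 if there is nothing to count."""
    return round(100 * numerator / denominator, 1) if denominator else 0.0

training_coverage = pct(sum(training_log.values()), len(training_log))
register_completeness = pct(
    sum(s["risk_profile_complete"] for s in ai_register), len(ai_register)
)

print(f"Basic training completed: {training_coverage}%")                # 66.7%
print(f"Systems with complete risk profile: {register_completeness}%")  # 50.0%
```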
Common pitfalls and how to avoid them
Only counting tools, not context
A list of AI systems without a description of purpose, autonomy, data flows, and the roles involved is insufficient. Start with the work and the decisions made with AI, and link the learning goals to those.
Training as isolated event
A single e-learning module doesn't change behavior. Combine micro-learning with practical assignments, peer sessions, and decision-making rules. Document how teams handle uncertainty in output and when human intervention is needed.
No board-level embedding
If management doesn't visibly participate, attention evaporates. Plan a quarterly rhythm where the board discusses status, records choices, and resolves obstacles.
Forgetting external links
The obligation also applies to persons acting on behalf of your organization. Include suppliers, contractors, and partners in your plan, with clear onboarding and agreements.
The DPA notes: "AI literacy is not limited to own employees. Third parties working with AI systems on behalf of the organization also fall under the obligation."
The DPA will actively monitor this topic
The DPA positions AI literacy as a focus area under its coordinating role for algorithms and AI. Expect further in-depth activities, monitoring of how organizations are progressing, and follow-up meetings where you can benchmark with peers.
This is valuable for anyone wanting to increase internal support and benchmark their own approach.
A compact roadmap for the next 90 days
Week 1–2: Inventory
Create a current list of AI applications, their purposes, degree of autonomy, and primary risks. Link a role matrix to it and determine the desired knowledge level per role. Use existing tools such as your IT asset register as a starting point.
Week 3–4: Goals
Formulate 3–5 measurable goals per risk domain. Document who owns each goal, how you measure progress, and how escalation works. Present this to the board for commitment and budget.
Month 2: Implementation
Start role-specific learning interventions. Publish your AI use register internally. Write a brief culture/vision piece ("How do we work with AI?"). Set up a status log.
Month 3: Evaluate and adjust
Discuss results in management team, analyze residual risk, adjust goals, and plan the next quarter. Include insights in your management reporting.
Quick wins with immediate impact:
- Update your data handling policy to explicitly mention AI tools
- Create an FAQ document with concrete examples of permitted and prohibited use
- Establish an "AI helpdesk" where employees can quickly check if something is allowed
- Add AI use to your onboarding program for new employees
Why this topic is a good fit for your organization
AI literacy is not a training track, but an organizational competency. It makes innovation safer, accelerates adoption, and ensures choices are explainable to the board, supervisors, and society.
With the DPA guidance and the European Commission's Q&A, there's now a clear framework to demonstrably arrange this, without unnecessary overhead. Start small, make it visible, and build on what works.
Three principles for success
1. Pragmatism over perfection: Start with AI systems that pose the most risk or are most used. You don't need everything in order at once.
2. Behavior over paper: An extensive policy document nobody reads is less effective than a short, practical document actually used in daily practice.
3. Enabler over blocker: Position AI literacy as something that helps people work better and safer, not as extra bureaucracy working against them.
The link to broader governance
AI literacy doesn't stand alone. It's a building block in your broader AI governance that also includes:
- AI risk management (FRIAs, DPIAs)
- Technical documentation of AI systems
- Transparency to users and stakeholders
- Incident management and escalation procedures
Organizations that tackle this comprehensively see that investments in AI literacy directly contribute to compliance with the entire AI Regulation. Employees who understand why certain rules exist also apply them better.
Conclusion: from obligation to organizational capability
The DPA guidance provides a workable framework to organize AI literacy as an ongoing program rather than a one-time action. The core is simple but powerful: identify systematically, set tailored goals, implement with board support, and evaluate continuously.
Three action items for next week:
- Download the DPA guidance and schedule a session with your AI governance team to review the four steps
- Inventory your current AI systems and who works with them (even a simple spreadsheet is a good start)
- Determine one concrete pilot for a specific department or risk group to start with
Organizations that start now build an advantage. Not just because they're compliant earlier, but especially because they develop a culture where AI is deployed responsibly and effectively. That's not a cost center, but an investment in future resilience.
Sources
- Dutch Data Protection Authority (Autoriteit Persoonsgegevens), Verder bouwen aan AI-geletterdheid (October 2025, in Dutch)
- Dutch Data Protection Authority, Aan de slag met AI-geletterdheid (earlier guidance, in Dutch)
- European Commission, AI Literacy – Questions & Answers (continuously updated)
- AI Regulation, Article 4: Obligations regarding AI literacy