Source: This article is based on session 6 "Building on AI Literacy" from the AI Supervision Congress 2025, presented by the Directorate for the Coordination of Algorithms of the Dutch Data Protection Authority.
Why theory alone isn't enough
Since February 2, 2025, AI literacy has been a legal requirement under Article 4 of the AI Act. But what does that mean in daily practice? During the AI Supervision Congress, the Dutch Data Protection Authority (AP) shared two pointed cases that show AI literacy isn't an abstract concept but a safeguard against concrete risks.
Case 1: The lawyer who trusted ChatGPT
The situation
Lawyer Sonja has been using ChatGPT for her work for months. It helps her with contract clauses, summaries, and legal research. She knows she should stay critical, but in her experience the output has always been correct. Her firm expects her to work 30% more efficiently thanks to AI.
When she needs last-minute supporting documentation for a plea, she asks ChatGPT for case law.
The next day in court, the case law turns out not to exist.
What went wrong?
- No understanding of hallucinations: Sonja didn't know LLMs can generate convincing but fictional information
- Overconfidence from successes: Earlier good experiences created false trust
- Time pressure: No room for verification
- No escalation protocol: No colleague check for AI-generated content
What could AI literacy have meant?
- Knowledge of model limitations: Knowing that LLMs aren't reliable sources for factual claims
- Verification protocol: Always checking primary sources for legal documentation (a sketch follows this list)
- Culture of critical use: Normalizing that AI output is never blindly accepted
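What a verification protocol can look like is sketched below. This is illustrative only: `fetch_case` is a hypothetical stand-in for a lookup against an official case-law registry (for Dutch law, think of the public rechtspraak.nl collection) and would need to be wired to a real primary source before any reliance on it.

```python
# Illustrative sketch of a citation verification step, not a real integration.
# `fetch_case` is a hypothetical stand-in for a lookup against an official
# case-law registry; everything it cannot confirm goes to a human.

from dataclasses import dataclass


@dataclass
class Citation:
    reference: str        # e.g. an ECLI identifier
    claimed_holding: str  # what the AI says the case decided


def fetch_case(reference: str) -> str | None:
    """Hypothetical lookup: return the official case text, or None if absent."""
    return None  # stub: connect this to a primary source before use


def needs_human_review(citations: list[Citation]) -> list[Citation]:
    """Fail closed: return every citation not confirmed in a primary source."""
    return [c for c in citations if fetch_case(c.reference) is None]
```

The design point is that it fails closed: any citation not confirmed against a primary source is treated as suspect by default, which is exactly the habit Sonja lacked.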
Case 2: The insurer who left the details to IT
The situation
The board of an insurer decides to deploy an AI chatbot for customer questions about policy conditions. They see the business case: lower costs, 24/7 availability, scalability. They leave the technical details to the IT department.
When the chatbot provides incorrect information about insurance coverage, resulting in financial damage, the board turns out to have been unaware of the possible failure scenarios.
What went wrong?
- Board ignorance: Management didn't understand the technology's risks
- No risk ownership: IT got responsibility but not the authority to say no
- Missing monitoring: Nobody checked if the chatbot gave correct information
- No escalation path: Complaints didn't reach the board
What could AI literacy have meant?
- Board involvement: Management that understands what can go wrong with AI decisions
- Due diligence in procurement: Asking vendors critical questions about accuracy and liability
- Monitoring and feedback loops: Structurally checking that the AI does what it should (a sketch follows this list)
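To make the monitoring point concrete, the sketch below assumes a small, human-reviewed set of question/answer pairs (a "golden set") and a hypothetical `ask_chatbot` function standing in for the deployed bot; a real system would compare answers semantically or route samples to human reviewers rather than matching strings.

```python
# Illustrative monitoring loop for a policy chatbot. `ask_chatbot` and the
# golden set are assumptions for this sketch, not a specific product's API.

ACCURACY_THRESHOLD = 0.95  # below this, escalate to the designated risk owner

# Small set of questions whose correct answers were reviewed by humans.
GOLDEN_SET = {
    "Is storm damage to my roof covered?": "yes, under the home policy",
    "Is flood damage covered?": "no, flood damage is excluded",
}


def ask_chatbot(question: str) -> str:
    """Hypothetical stand-in for the deployed chatbot."""
    return "no, flood damage is excluded"  # stubbed answer for the sketch


def weekly_accuracy_check() -> None:
    correct = sum(
        1 for question, expected in GOLDEN_SET.items()
        if ask_chatbot(question).strip().lower() == expected
    )
    accuracy = correct / len(GOLDEN_SET)
    if accuracy < ACCURACY_THRESHOLD:
        # A real setup would alert the risk owner, not just print.
        print(f"ALERT: chatbot accuracy {accuracy:.0%} is below threshold")


if __name__ == "__main__":
    weekly_accuracy_check()
```

Even a crude loop like this gives the board what it was missing in the case above: a recurring signal, owned by someone, that reaches them before customers are harmed.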
The core message from the Dutch DPA
The AP emphasized during the congress:
"AI literacy is a core prerequisite for responsible AI and algorithms. It enables organizations to optimally leverage the opportunities of innovative technology AND properly assess a system's impact."
Not just for techies: AI literacy is about being able to make informed decisions. That applies to everyone: from board member to end user.
How other organizations are tackling it
During the congress, two concrete implementation examples were shared:
Example 1: Machine building company (±240 employees)
| Aspect | Approach |
|---|---|
| Responsibility | Policy working group with mandatory meetings |
| Training | AI knowledge session for 20% of company (managers + delegates) |
| Practice | Workshop for 2 promising AI applications + MS Copilot pilot |
| Policy | Company policy on intranet with short e-learning |
| Inventory | Managers provided input for 50+ possible AI applications |
Example 2: Government agency
| Aspect | Approach |
|---|---|
| Responsibility | Appointed AI officer as point of contact for entire organization |
| Training | AI training for broad group + online workshop on generative AI (do's and don'ts) |
| Policy | Guidelines in development for generative AI |
| Framework | Data lab with AI applications and frameworks, guides employees in pilots |
Role-specific approach: Who needs to know what?
During the congress, the AP presented a differentiation model for the different actors within an organization:
General (all employees)
Goal: Increase awareness and promote basic AI knowledge
Management
Goal: Insight into operation, risks, and impact for informed decision-making
General users
Goal: Critically apply and evaluate AI output
Data analysts
Goal: Deep understanding of working with data and AI models
Engineers
Goal: Design, develop, and optimize AI solutions
Legal & Compliance
Goal: Be aware of AI operation to advise effectively and responsibly
Update: Digital Omnibus may weaken requirement
Note: The European Commission has proposed a package (the Digital Omnibus) that could weaken the AI literacy requirement for non-high-risk AI. The European Parliament and the Council still need to approve it.
Current status: For high-risk AI, obligations remain fully in force.
This doesn't mean you should wait. The cases above show that AI literacy is essential for risk management even without legal requirements.
Congress conclusions
The AP closed the session with five observations from practice:
- Many organizations have AI literacy on their radar and are taking initiatives
- Ad-hoc measures aren't sufficient: structural embedding is essential
- Motivating employees remains a challenge
- Continuous effort needed due to rapid AI development
- The AP remains active on this topic and will monitor organizations
Immediately applicable: 3 actions for this week
Share the cases
Discuss the lawyer and insurer cases in your next team meeting. Ask: "Could this happen to us?"
Identify your AI officer
Appoint someone responsible for AI literacy, even if only part-time.
Start with 1 pilot
Choose one department or tool and begin with a structured approach there.
Further reading
- AI Literacy is Now Enforceable Policy – The AP guidance explained
- AI Supervision Congress 2025: All Insights – Complete congress overview
- Building AI Literacy in Your Organization – The four-step model
🎯 Need training? Schedule a call to discuss how we can help your team with practical AI literacy.