
AI Literacy in Practice: Lessons from the AI Supervision Congress

8 min read

Source: This article is based on Session 6, "Building on AI Literacy," from the AI Supervision Congress 2025, presented by the Directorate for the Coordination of Algorithms of the Dutch Data Protection Authority.

Why theory alone isn't enough

Since February 2, 2025, AI literacy has been a legal requirement under Article 4 of the AI Act. But what does that mean in daily practice? During the AI Supervision Congress, the Dutch Data Protection Authority (AP) shared two pointed cases showing that AI literacy isn't an abstract concept but a safeguard against concrete risks.


Case 1: The lawyer who trusted ChatGPT

The situation

Lawyer Sonja has been using ChatGPT for months in her work. It helps her with contract clauses, summaries, and legal research. She knows she should stay critical, but in her experience the output is virtually always correct. Her firm expects her to work 30% more efficiently thanks to AI.

When she needs last-minute supporting documentation for a plea, she asks ChatGPT for case law.

The next day in court, the case law turns out not to exist.

What went wrong?

  1. No understanding of hallucinations – Sonja didn't know LLMs can generate convincing but fictional information
  2. Overconfidence from successes – Earlier good experiences created false trust
  3. Time pressure – No room for verification
  4. No escalation protocol – No colleague check for AI-generated content

What could AI literacy have meant?

  • Knowledge of model limitations – Knowing that LLMs aren't reliable sources for factual claims
  • Verification protocol – Always checking primary sources for legal documentation (a minimal sketch follows this list)
  • Culture of critical use – Normalizing that AI output is never blindly accepted
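
To make the verification-protocol point concrete, here is a minimal sketch (ours, not the AP's) of a pre-filing check: it scans AI-generated text for ECLI case-law identifiers and turns them into a checklist that a human must verify against a primary source. The regex and function names are illustrative assumptions, and informal citations that don't use the ECLI format will slip past a pattern like this.

```python
import re

# ECLI (European Case Law Identifier), e.g. ECLI:NL:HR:2019:1234.
# Illustrative pattern: informal or malformed citations will not match.
ECLI_PATTERN = re.compile(r"ECLI:[A-Z]{2}:[A-Z0-9]+:\d{4}:\d+")

def extract_citations(ai_output: str) -> list[str]:
    """Return the unique ECLI-style citations found in AI-generated text."""
    return sorted(set(ECLI_PATTERN.findall(ai_output)))

def verification_checklist(ai_output: str) -> str:
    """Build a checklist a human must tick off before the draft leaves the firm."""
    citations = extract_citations(ai_output)
    if not citations:
        return "No ECLI citations found; verify informal case references by hand."
    lines = ["Verify each citation in a primary source before filing:"]
    lines += [f"  [ ] {ecli}" for ecli in citations]
    return "\n".join(lines)

if __name__ == "__main__":
    draft = "Support is found in ECLI:NL:HR:2019:1234 and ECLI:NL:GHARL:2021:56."
    print(verification_checklist(draft))
```

The tool doesn't judge whether the case law exists; it only forces the human verification step that was missing in Sonja's workflow.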

Case 2: The insurer who left the details to IT

The situation

The board of an insurer decides to deploy an AI chatbot for customer questions about policy conditions. They see the business case: lower costs, 24/7 availability, scalability. They leave the technical details to the IT department.

When the chatbot provides incorrect information about insurance coverage, resulting in financial damage, the board turns out to be unaware of the possible failure scenarios.

What went wrong?

  1. Board ignorance – Management didn't understand the technology's risks
  2. No risk ownership – IT got the responsibility but not the authority to say no
  3. Missing monitoring – Nobody checked whether the chatbot gave correct information
  4. No escalation path – Complaints didn't reach the board

What could AI literacy have meant?

  • Board involvement – Management that understands what can go wrong with AI decisions
  • Due diligence in procurement – Asking vendors critical questions about accuracy and liability
  • Monitoring and feedback loops – Structurally checking whether the AI does what it should (a minimal sketch follows this list)
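
As an illustration of the monitoring bullet, the sketch below (our assumption, not the insurer's actual setup) samples a fraction of chatbot conversations for human review and escalates to the board once the measured error rate crosses a predefined threshold. All names and the 2% threshold are illustrative.

```python
import random
from dataclasses import dataclass

@dataclass
class ReviewedAnswer:
    question: str
    bot_answer: str
    is_correct: bool  # judged by a human reviewer against the policy documents

def sample_for_review(conversation_ids: list[str], rate: float = 0.05) -> list[str]:
    """Randomly pick a fraction of conversations for human review."""
    k = max(1, int(len(conversation_ids) * rate))
    return random.sample(conversation_ids, min(k, len(conversation_ids)))

ESCALATION_THRESHOLD = 0.02  # illustrative: >2% wrong answers must reach the board

def monitoring_report(reviewed: list[ReviewedAnswer]) -> str:
    """Summarize review results and decide whether the board must be informed."""
    errors = sum(1 for r in reviewed if not r.is_correct)
    rate = errors / len(reviewed)
    status = "escalate to board" if rate > ESCALATION_THRESHOLD else "within tolerance"
    return f"reviewed={len(reviewed)}, error_rate={rate:.1%}, status={status}"

if __name__ == "__main__":
    reviewed = [
        ReviewedAnswer("Is storm damage covered?", "Yes, always.", False),
        ReviewedAnswer("What is my deductible?", "EUR 350 per claim.", True),
    ]
    print(monitoring_report(reviewed))
```

The specifics matter less than the loop itself: sampled output, human judgment against a primary source, and a predefined path to the board when quality drops.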

The core message from the Dutch DPA

The AP emphasized during the congress:

"AI literacy is a core prerequisite for responsible AI and algorithms. It enables organizations to optimally leverage the opportunities of innovative technology AND properly assess a system's impact."

Not just for techies: AI literacy is about being able to make informed decisions. That applies to everyone: from board member to end user.


How other organizations are tackling it

During the congress, two concrete implementation examples were shared:

Example 1: Machine building company (±240 employees)

  • Responsibility – Policy working group with mandatory meetings
  • Training – AI knowledge session for 20% of the company (managers + delegates)
  • Practice – Workshop on 2 promising AI applications + an MS Copilot pilot
  • Policy – Company policy on the intranet with a short e-learning
  • Inventory – Managers provided input for 50+ possible AI applications

Example 2: Government agency

  • Responsibility – Appointed AI officer as the point of contact for the entire organization
  • Training – AI training for a broad group + online workshop on generative AI (do's and don'ts)
  • Policy – Guidelines for generative AI in development
  • Framework – Data lab with AI applications and frameworks that guides employees in pilots

Role-specific approach: Who needs to know what?

During the congress, the AP presented a model that differentiates AI literacy goals for the various actors within an organization:

  1. General (all employees) – Goal: increase awareness and promote basic AI knowledge
  2. Management – Goal: insight into operation, risks, and impact for informed decision-making
  3. General users – Goal: critically apply and evaluate AI output
  4. Data analysts – Goal: deep understanding of working with data and AI models
  5. Engineers – Goal: design, develop, and optimize AI solutions
  6. Legal & Compliance – Goal: be aware of how AI works in order to advise effectively and responsibly


Update: Digital Omnibus may weaken requirement

Note: The European Commission has proposed a legislative package (the Digital Omnibus) that could weaken the AI literacy requirement for non-high-risk AI. The European Parliament and the Council still need to approve it.

Current status: For high-risk AI, obligations remain fully in force.

This doesn't mean you should wait. The cases above show that AI literacy is essential for risk management even without legal requirements.


Congress conclusions

The AP closed the session with five observations from practice:

  1. Many organizations have AI literacy on their radar and are taking initiatives
  2. Ad-hoc measures aren't sufficient โ€“ structural embedding is essential
  3. Motivating employees remains a challenge
  4. Continuous effort needed due to rapid AI development
  5. The AP remains active on this topic and will monitor organizations

Immediately applicable: 3 actions for this week

  1. Share the cases – Discuss the lawyer and insurer cases in your next team meeting. Ask: "Could this happen to us?"
  2. Identify your AI officer – Appoint someone responsible for AI literacy, even if part-time.
  3. Start with one pilot – Choose one department or tool and begin with a structured approach there.




Sources

Dutch Data Protection Authority: Session 6: Building on AI Literacy (December 2025)
Dutch Data Protection Authority: Building on AI Literacy (guidance) (2025)

๐ŸŽฏ Need training? Schedule a call to discuss how we can help your team with practical AI literacy.