A practical evidence pack for organisations that want to build AI literacy and demonstrate it per role, system and risk context.
Article 4 does not ask for one standard course for everyone. The core requirement is that providers and deployers take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other people working with AI on their behalf. That level depends on technical knowledge, experience, education and training, the context in which AI is used, and the people affected by the system.
A demonstrable evidence file answers questions such as:

Which AI systems or AI tools are used and by whom.
Which roles work with those systems or act on behalf of the organisation.
Which risks, limitations and usage rules matter per role.
Which training, guidance, assessment and follow-up are in place.
How progress, exceptions and updates are tracked at management level.
A supervisor is unlikely to ask only whether someone completed a course. The better question is whether you can explain why your measures fit your organisation's AI use.
1. AI use and role matrix
Connect AI systems, purposes, teams, external parties and risk profiles. This prevents generic training without a link to real work.
2. Role-based learning goals
Document what management, users, compliance, HR, IT and suppliers need to understand. A recruiter, a lawyer and a data engineer need different learning goals.
3. Training materials and usage rules
Keep modules, practical exercises, policies, prompts, work instructions and decision rules. Make responsible AI use visible.
4. Records and follow-up
Track attendance, assessment results, certificates, exceptions and remediation. A certificate helps, but the context makes the evidence strong.
5. Management reporting
Report progress, incidents, open risks and updates each quarter. AI literacy is an ongoing governance process.
No, a certificate is not mandatory: the European Commission has stated that there is no certificate requirement. An online certificate is still useful supporting evidence when it sits inside a broader file with role-based learning goals, records and evaluation.
Awareness is a useful start: people understand that AI has opportunities and risks. AI literacy goes further and must connect to the task, the system and the impact on affected people.
| Area | AI awareness | AI literacy |
|---|---|---|
| Goal | Awareness | Responsible action in context |
| Evidence | Attendance or e-learning | Role matrix, learning goals, assessment and follow-up |
| Depth | Basic concepts | Risks, limitations, governance and practical decisions |
Inventory AI tools, processes, teams and external parties. Start with the highest impact or broadest use.
Define a minimum competency per role: what should someone understand, assess and document?
Start role-based modules, short practical cases and assessments. Record outcomes immediately.
Create a management update with attendance, scores, open gaps, exceptions and the next quarterly plan.
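The steps above (inventory, minimum competency per role, recorded outcomes, management update) can be sketched as plain data plus two small reports. This is a minimal illustration of the idea, not a prescribed format or tool; every system name, role and evidence field below is an invented example.

```python
# Illustrative sketch: AI inventory and role matrix as plain data.
# All systems, roles and evidence fields are invented examples.
ROLE_MATRIX = [
    {"system": "CV screening tool", "role": "Recruiter", "risk": "high",
     "learning_goals": True, "training_done": True, "assessment_passed": False},
    {"system": "Contract summariser", "role": "Lawyer", "risk": "limited",
     "learning_goals": True, "training_done": True, "assessment_passed": True},
    {"system": "Code assistant", "role": "Data engineer", "risk": "limited",
     "learning_goals": False, "training_done": False, "assessment_passed": False},
]

# The evidence each role/system entry should carry before it counts as complete.
EVIDENCE_FIELDS = ("learning_goals", "training_done", "assessment_passed")

def open_gaps(matrix):
    """List (role, system, missing evidence) for every incomplete entry."""
    gaps = []
    for entry in matrix:
        missing = [f for f in EVIDENCE_FIELDS if not entry[f]]
        if missing:
            gaps.append((entry["role"], entry["system"], missing))
    return gaps

def quarterly_summary(matrix):
    """Counts for the management update: complete vs open entries."""
    gaps = open_gaps(matrix)
    return {"entries": len(matrix), "open": len(gaps),
            "complete": len(matrix) - len(gaps)}

print(quarterly_summary(ROLE_MATRIX))
# prints {'entries': 3, 'open': 2, 'complete': 1}
```

In practice the same structure usually lives in a spreadsheet or an LMS rather than in code; the point is that one table can drive both the per-role gap list and the quarterly management numbers.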
Use LearnWize when you want to execute this evidence approach at team level: assessment, role-based modules, certificates, training records and team reporting in one place.
Everyone who works with AI systems, or uses them on the organisation's behalf, needs a suitable level of AI literacy. That does not mean everyone needs the same depth.
Think of an AI inventory, role matrix, learning goals, training records, assessment results, policies, management reporting and evidence of periodic evaluation.
LearnWize helps with the evidence: learning paths, assessments, certificates and reporting. The organisation remains responsible for scope, governance and application in its own context.