Responsible AI Platform

IAMA Quick Scan Checklist

Determine whether a Human Rights and Algorithms Impact Assessment (IAMA) is needed

Answer the following 10 questions to determine if a full IAMA is needed for your AI system. Count the number of "Yes" answers for your score.

1. Are personal data being processed?

For example: social security numbers, names, addresses, income, health data, or other identifying information about individuals.

Examples: Benefit applications, permit requests, care files

Yes / No

2. Does the system influence decisions about individuals?

The system contributes to decisions that directly or indirectly affect citizens' rights, obligations, or access to services.

Examples: Approval/rejection of applications, case prioritization, risk assessment

Yes / No

3. Does the system affect vulnerable groups?

People who need extra protection because of their situation or characteristics.

Examples: Minors, benefit recipients, asylum seekers, people with debts

Yes / No

4. Is there profiling or scoring involved?

The system categorizes, ranks, or assigns scores to individuals or groups.

Examples: Risk scores, fraud scores, priority lists, target group segmentation

Yes / No

5. Does the system combine data from multiple sources?

Data is linked from different registers, systems, or organizations.

Examples: Linking population register + debt register, municipal data + police data

Yes / No

6. Is human intervention limited or absent?

The system operates largely autonomously, or human control is merely formal.

Examples: Automatic decisions, system-generated advice that is routinely adopted

Yes / No

7. Can the system lead to exclusion from services?

The system's output can prevent citizens from accessing facilities, help, or services.

Examples: Denial of benefits, blocking of services, exclusion from programs

Yes / No

8. Is the system difficult to explain to citizens?

The system's operation is complex and difficult to explain in plain language.

Examples: Machine learning models, neural networks, complex rule combinations

Yes / No

9. Are there historical biases in the training data?

The data on which the system is based may contain patterns that reproduce discrimination.

Examples: Historical enforcement data, data from periods with unequal policies

Yes / No

10. Does the system fall under Annex III of the AI Act?

The EU AI Act classifies certain applications as "high-risk" in Annex III.

Examples: Access to education, employment, social services, law enforcement, migration

Yes / No

Your Score

Count the number of "Yes" answers:

0-2 "Yes" answers: Low risk

A full IAMA is probably not needed. Consider briefly documenting your reasoning.

3-5 "Yes" answers: Moderate risk

Consider conducting an IAMA. Consult your DPO or legal department for advice.

6-10 "Yes" answers: High risk

An IAMA is strongly recommended. Involve relevant stakeholders and experts in carrying it out.
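
If you want to embed the quick scan in an intake form or internal tooling, the scoring rule above is straightforward to automate. The Python sketch below is a minimal, illustrative example only, not part of the official IAMA framework; the QUESTIONS list and the quick_scan_risk function are names chosen here for clarity.

```python
# Illustrative sketch of the quick-scan scoring rule (assumed structure, not an official tool).

QUESTIONS = [
    "Are personal data being processed?",
    "Does the system influence decisions about individuals?",
    "Does the system affect vulnerable groups?",
    "Is there profiling or scoring involved?",
    "Does the system combine data from multiple sources?",
    "Is human intervention limited or absent?",
    "Can the system lead to exclusion from services?",
    "Is the system difficult to explain to citizens?",
    "Are there historical biases in the training data?",
    "Does the system fall under Annex III of the AI Act?",
]

def quick_scan_risk(answers: list[bool]) -> str:
    """Map the number of 'Yes' answers to the risk level described above."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("Expected one answer per question")
    score = sum(answers)  # each True counts as one 'Yes'
    if score <= 2:
        return "Low risk: a full IAMA is probably not needed"
    if score <= 5:
        return "Moderate risk: consider conducting an IAMA"
    return "High risk: an IAMA is strongly recommended"

if __name__ == "__main__":
    # Example: 'Yes' to questions 1, 2, 4, and 10 gives a score of 4 (moderate risk).
    example = [True, True, False, True, False, False, False, False, False, True]
    print(quick_scan_risk(example))
```

Any yes/no intake form that produces the same count of "Yes" answers can feed this threshold logic; the code does not replace the judgment of your DPO or legal department.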

Disclaimer: This checklist is a tool and does not replace legal advice. When in doubt, consult your Data Protection Officer or legal department.

Based on the IAMA framework of the Dutch Government and the EU AI Act.

Version January 2025