
Article 5: Prohibited AI practices

Title II: Prohibited AI Practices · In force since 2 February 2025

Official text


Source: EUR-Lex, Regulation (EU) 2024/1689 — text reproduced verbatim.


📘 Official Guidance

Published 4 Feb 2025

Guidelines on prohibited AI practices

The European Commission provides extensive guidance on the eight categories of prohibited AI practices in Article 5. The guidelines contain concrete examples and edge cases, and clarify the relationship between prohibited practices and high-risk classification. Systems not prohibited under Article 5 may still be classified as high-risk under Article 6.

Key points (8)
  • Manipulative AI: prohibited when it significantly influences behaviour or decisions without the person's awareness
  • Exploitation of vulnerabilities: specifically protects children, elderly and persons with disabilities
  • Social scoring: prohibited for public authorities and private parties alike when it leads to unjustified detrimental treatment
  • Biometric categorisation based on sensitive characteristics (race, religion, sexual orientation) is prohibited
  • Real-time biometric identification in public spaces is prohibited except for specific law enforcement
  • Emotion recognition in the workplace and education is prohibited
  • Scraping facial images for databases (like Clearview AI) is prohibited
  • Law enforcement exceptions require prior judicial authorisation and a fundamental rights impact assessment (FRIA)

🎯 What does this mean for you?

🏭 Provider
As a provider, you may not place AI systems on the market that use manipulative techniques, exploit vulnerabilities, or apply social scoring. Review your AI portfolio for these prohibited applications; the prohibitions have applied since 2 February 2025.
🏢 Deployer
As a deployer, you may not use prohibited AI systems. Verify that your suppliers confirm their systems do not fall under prohibited categories. Pay specific attention to emotion recognition in the workplace.
🏪 SME / Startup
Even as a small business, the prohibitions apply in full. Most SMEs will not be affected, but check whether you use AI for customer segmentation or employee assessment — these may fall under the prohibitions.
🏛️ Public Sector
Public authorities may not deploy social scoring systems. Real-time biometric identification in public spaces is prohibited, except in strictly defined law enforcement exceptions.

⚖️ Related Enforcement

  • AI Act prohibited practices enter into force — no enforcement actions yet (European AI Office · Feb 2025)
  • CNIL fines Amazon €32 million for AI employee monitoring (CNIL · Dec 2023)
  • Dutch DPA fines Clearview AI €30.5 million (Autoriteit Persoonsgegevens · Sept 2024)
  • Clearview AI fined €20 million by Italian DPA (Garante per la protezione dei dati personali · Mar 2022)
  • Greek DPA fines Clearview AI €20 million (Hellenic Data Protection Authority · Jul 2022)

Frequently asked questions

Which AI practices are prohibited under the AI Act?
Article 5 prohibits eight practices: manipulative AI techniques, exploitation of vulnerabilities, social scoring, predictive policing based on personality traits, untargeted scraping of facial images, emotion recognition in the workplace and education, biometric categorisation based on sensitive characteristics, and real-time biometric identification in public spaces (with narrow law enforcement exceptions).
What are the penalties for prohibited AI practices?
Violations of Article 5 can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher.
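The "whichever is higher" rule means the ceiling scales with company size. A minimal sketch of that arithmetic (the function name is illustrative, not from the Act):

```python
# Illustrative only: the fine ceiling for Article 5 violations is the
# HIGHER of EUR 35 million or 7% of total worldwide annual turnover.
def article5_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for an Article 5 violation."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Small company (EUR 10M turnover): 7% is only EUR 0.7M, so the
# EUR 35M absolute ceiling applies.
print(article5_max_fine(10_000_000))     # 35000000.0

# Large company (EUR 1B turnover): 7% = EUR 70M exceeds EUR 35M.
print(article5_max_fine(1_000_000_000))  # 70000000.0
```

Note this is the maximum; the actual fine imposed depends on the circumstances of the individual case.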
Since when do the prohibited AI practices apply?
The prohibitions in Article 5 have been in force since 2 February 2025.
Do SMEs also need to comply with Article 5 of the AI Act?
Article 5 of the AI Act does not provide a general exemption for SMEs. However, the AI Act includes supportive measures and potentially lighter obligations for small and medium-sized enterprises, depending on their role in the AI value chain.
How does Article 5 of the AI Act relate to the GDPR?
Article 5 of the AI Act complements the GDPR. While the GDPR protects personal data, the AI Act focuses on the safety and trustworthiness of AI systems. Organisations must comply with both regulations when their AI system processes personal data.
What are the deadlines for Article 5 of the AI Act?
The AI Act follows a phased implementation. Prohibited AI practices apply from February 2025, obligations for high-risk AI systems from August 2026, and other provisions take effect gradually. For Article 5 itself the deadline has already passed: the prohibitions have applied since 2 February 2025.
Does Article 5 of the AI Act also apply to AI systems I purchase?
Yes, Article 5 of the AI Act may also be relevant when you purchase AI systems. As a deployer, you have your own obligations under the AI Act, regardless of whether you developed the system yourself or purchased it from a provider.
What is the difference between provider and deployer under Article 5 of the AI Act?
Under Article 5 of the AI Act, the provider is the entity that develops or places the AI system on the market, while the deployer is the entity that uses the system under its own authority. Both roles carry different obligations.
Which AI applications have been prohibited since February 2025?
Since 2 February 2025, the following are prohibited: manipulative and deceptive AI, exploitation of vulnerable groups, social scoring, predictive policing based on personality traits, untargeted scraping of facial images, emotion recognition in the workplace and education, and biometric categorisation based on sensitive characteristics.
Can I still use emotion recognition in my company?
Emotion recognition in the workplace and education is prohibited under Article 5. Emotion recognition is allowed in other contexts, but is then classified as high-risk AI under Annex III and must comply with strict requirements.
Is social scoring by companies also prohibited or only by governments?
The prohibition on social scoring applies to both public authorities and private organisations. Article 5 prohibits AI systems that assess or classify persons based on social behaviour or personality traits, if this leads to detrimental or unfavourable treatment that is unjustified.
How do I know if my AI system falls under prohibited practices?
Check whether your AI system touches any of the eight prohibited categories in Article 5. Pay specific attention to manipulative techniques, exploitation of vulnerable groups, biometric applications and emotion recognition. When in doubt, a legal review is advisable, as violations carry the highest fine category (up to 7% turnover).
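The eight categories above can be phrased as a simple yes/no screening list. A hypothetical sketch (the category keys, question wording, and `screen` helper are illustrative, not official terminology, and a flagged answer means "seek legal review", not a definitive legal conclusion):

```python
# Hypothetical self-screening aid for the eight Article 5 categories.
# Any "True" answer warrants legal review of the system.
PROHIBITED_CATEGORIES = {
    "manipulative_or_deceptive": "Uses subliminal or manipulative techniques?",
    "exploits_vulnerabilities": "Exploits age, disability, or social/economic vulnerability?",
    "social_scoring": "Scores persons on social behaviour or personality traits?",
    "predictive_policing_traits": "Predicts criminal behaviour from personality traits?",
    "facial_image_scraping": "Builds facial databases via untargeted scraping?",
    "emotion_recognition_work_edu": "Infers emotions in the workplace or education?",
    "biometric_categorisation": "Infers sensitive characteristics from biometric data?",
    "realtime_biometric_id_public": "Performs real-time biometric ID in public spaces?",
}

def screen(answers: dict[str, bool]) -> list[str]:
    """Return the prohibited categories the system may touch."""
    return [cat for cat in PROHIBITED_CATEGORIES if answers.get(cat)]

flags = screen({"emotion_recognition_work_edu": True, "social_scoring": False})
print(flags)  # ['emotion_recognition_work_edu']
```

This is a triage aid only; the Commission guidelines and a legal review remain the authoritative route.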
What is the difference between prohibited AI and high-risk AI?
Prohibited AI (Article 5) may not be used in the EU at all — there is no way to be compliant. High-risk AI (Article 6) may be used, but only if strict requirements around risk management, transparency, data governance, human oversight and more are met.
