The AI Act Has Really Started – What Has Happened Since?
Published: June 14, 2025 – reading time ± 10 minutes
When the EU AI Act appeared in the Official Journal on August 1, 2024, it still felt abstract. Eleven months later, that's over: the first prohibitions already apply, employees must demonstrably be AI-literate, and consultations are flying around Brussels. In this blog, I'll take you through the most important developments since the entry into force – practical, legal, and political. No repetition of the legal text; rather a look at what happened after August 1 and why you need to know this today.
1. February 2025: The First Hard Hit
February 2, 2025 was the date when the AI Act showed its teeth. Two provisions came into immediate effect:
- Prohibited AI practices – everything covered under Article 5 (social credit systems, manipulative or exploitative AI, emotion AI in schools and workplaces, large-scale real-time biometrics) had to be removed from the market or shut down immediately.
- AI literacy obligation – every organization that builds or uses AI must now be able to demonstrate that personnel have "sufficient AI knowledge." The Dutch Data Protection Authority (AP) published a special page in early March with explanations, checklists, and training suggestions.
Those who thought the fines (up to €35 million or 7% of global annual turnover, whichever is higher) would only become relevant in 2026 were mistaken: the sanction articles already take effect this year, on August 2, 2025. So pay attention, especially if a SyRI-like scoring algorithm is still running in the basement somewhere.
2. Brussels Explains What "Prohibited" Exactly Means
The timing was tight: on February 4, 2025 – two days after the prohibition articles took effect – the European Commission published draft guidelines on prohibited AI practices. The document is full of practical examples ("this is allowed" vs. "this is not") and adds nuance to, for example, emotion recognition: merely analyzing facial expressions in the workplace quickly falls under the prohibition, but measuring customer satisfaction through surveys does not.
For now it remains a draft; stakeholders had six weeks to provide feedback. Expect a final version in Q3 that supervisors will use as their assessment framework. Since guidelines are non-binding, we will only get full certainty once the Court of Justice eventually rules on them, but they provide much-needed direction now.
3. Generative AI Under the Microscope
Large language models and other general-purpose AI (GPAI) models become subject to their own obligations from August 2, 2025. To prevent misunderstandings, the newly established European AI Office launched a targeted consultation on April 22, 2025:
- What exactly is a GPAI model?
- When are you a "provider" (including fine-tuning or downstream deployment)?
- How do you publish a "summary of training data" without leaking trade secrets?
Over 250 parties – from open-source collectives to Big Tech – responded. In parallel, the AI Office is working on a Code of Practice that providers can voluntarily follow to demonstrate compliance. The schedule: process the consultation input in summer, final GPAI guidelines and code in autumn. Anyone rolling out a model like GPT-5, Llama 4, or an industrial model must therefore already be thinking about documentation, the handling of copyright claims, and risk assessments.
4. Netherlands: Supervisors Warming Up
4.1 AP + RDI: Proposal for "Hub-and-Spoke" Supervision
In June 2024, the AP and the Dutch Inspectorate of Digital Infrastructure (RDI) published a position paper: let the AP act as the central market supervisor for prohibited AI and most high-risk systems, while sectoral supervisors (NVWA, IGJ, ILT) retain their existing product domains.
The final advice (February 2025) repeated that model and asked for additional budget and AI experts. The cabinet must take a formal decision before August 2, 2025.
4.2 Consultations on Prohibited AI
Meanwhile, on the AP website: a series of calls for input on prohibited AI systems. Organizations could share case studies and concerns about, for example, social scoring algorithms and the exploitation of vulnerable groups. The goal: gain insight into practice so that the AP can enforce in a targeted manner from day one.
4.3 AI Literacy as Compliance Engine
The AI literacy obligation is very much a live topic, especially among municipalities and healthcare institutions. The AP has bundled FAQs, example training modules, and a self-assessment. A tip for companies: share training documentation with your data protection officer, so you have evidence ready for the supervisor right away.
5. The New EU Governance Structure
Since autumn, three layers have been positioning themselves:
- European AI Office – the central hub: drafts guidelines and supervises GPAI models.
- European AI Board – platform of national authorities for consistent enforcement.
- National market supervisors – must be officially designated by August 2, 2025 at the latest. The Netherlands is ahead, but in some member states the discussion has only just begun.
The European Data Protection Board (EDPB) called on all member states to give their privacy authorities a prominent AI supervisory role, so that the overlap with GDPR enforcement is organized logically. Expect one desk per country for citizen complaints, and even more Brussels meeting tables where AI supervisors meet each other.
6. Feasibility Discussion: Pause or Continue?
Not everyone is comfortable with the pace. In May and June, signals emerged that the European Commission is considering a "stop-the-clock": postponing certain deadlines (particularly the GPAI obligations) until technical standards are finished and enough testing institutions are operational.
Poland in particular – holder of the rotating Council presidency in the first half of 2025 – openly advocates postponement. SME umbrella organizations endorse this: without standards, suppliers do not know exactly how to prove their risk management system. At the same time, NGOs warn that every month of delay postpones protection for citizens. It promises to be a hot agenda item at the July 2025 Telecom Council.
7. Views from Outside Europe
- US – There is no federal AI law, but states such as California are moving. American Big Tech increasingly builds "EU-by-design" to avoid duplicating work.
- UK – Sticks to principle-based, sector-specific supervision and uses the AI Safety Summit track to discuss frontier-AI risks.
- G7/Council of Europe – The Hiroshima declaration and the new Council of Europe treaty follow the same value line; the AI Act serves as a template.
For European companies this means the Brussels Effect strikes again: products that are compliant in the EU are often acceptable elsewhere too – but the reverse does not hold.
8. What Organizations Must Do Now
- Clean up your AI portfolio. Scan all applications: prohibited? High-risk? Limited-risk?
- Document AI literacy. Plan training, record attendance and test results.
- Follow the consultations. The high-risk AI consultation deadline (July 18, 2025) and the final GPAI code (autumn) determine your roadmap.
- Check contracts. Suppliers delivering GPAI models after August 2, 2025, must comply with new disclosure obligations – put that in the SLA.
- Reserve budget. Conformity assessment is not an Excel exercise; count on external audits and, for high-risk systems, CE marking.
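The portfolio scan in the first step can start as simply as a tagged inventory. Below is a minimal Python sketch: the system names and their tier assignments are hypothetical examples, and classifying a real system against Article 5 and Annex III of course requires legal analysis, not a one-line label.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5 practices: remove from use immediately
    HIGH = "high"              # Annex III use cases: conformity assessment needed
    LIMITED = "limited"        # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"        # no specific AI Act obligations

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

# Hypothetical inventory of a mid-sized organization
portfolio = [
    AISystem("cv-screener", "ranks job applicants", RiskTier.HIGH),
    AISystem("support-bot", "customer chat assistant", RiskTier.LIMITED),
    AISystem("spam-filter", "filters inbound mail", RiskTier.MINIMAL),
]

# Surface the systems that need action before August 2, 2025
urgent = [s.name for s in portfolio
          if s.tier in (RiskTier.PROHIBITED, RiskTier.HIGH)]
print(urgent)  # ['cv-screener']
```

Even a simple register like this doubles as documentation for the supervisor: it shows you have inventoried your systems and applied the Act's risk categories deliberately.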
Conclusion
The EU AI Act is no longer a futuristic vision; it makes a daily difference in the workplace, in the development studio, and in the boardroom. Between now and August 2, 2025, the law gets a second wave: GPAI supervision, the sanctions regime, and the official designation of national supervisors. Whether that date holds depends on the stop-the-clock discussion – but waiting is not a wise strategy.
Organizations that already did their homework in recent months discover that compliance is not just a cost item. It delivers sharper governance, better datasets, and especially the confidence that your AI applications can withstand European scrutiny. And that becomes, pause or not, the new normal.
Want to Know More About AI Compliance?
Curious how your organization can prepare for the latest developments of the EU AI Act? At Embed AI, we help organizations with practical compliance strategies and AI literacy plans. Let us know, and we'll be happy to think along with you!
Sources: European Commission – Guidelines prohibited AI (Feb 4, 2025); GPAI Consultation (Apr 22, 2025); AP – AI literacy (Mar 2025); AP – Input prohibited AI (Apr 2025); DLA Piper – possible AI Act pause (June 2025); LinkedIn/MLex leak about "stop-the-clock" (June 2025).