EU AI Act Article 5 Prohibitions Take Effect Across EU
Summary
On February 2, 2025, the EU AI Act's Article 5 prohibitions became directly enforceable across all EU member states — the first binding prohibitions on specific AI applications in any major jurisdiction. The rules ban social scoring (by public and private actors alike), real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), emotion recognition in workplaces and educational institutions (except for medical or safety reasons), and AI systems designed to exploit psychological vulnerabilities. Non-compliance carries fines of up to €35 million or 7% of global annual turnover, whichever is higher.
What Happened
The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024, with a staggered implementation timetable. Article 5 prohibitions — the Act's strictest rules — became applicable on February 2, 2025, six months after entry into force.
The prohibitions cover: AI systems that deploy subliminal or manipulative techniques to materially distort behavior; exploitation of vulnerabilities based on age, disability, or social or economic circumstances; biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation; social scoring that leads to detrimental or disproportionate treatment, whether by public or private actors; untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases; real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrowly defined exceptions requiring prior authorization by a judicial or independent administrative authority); predicting the risk of a person committing a criminal offense based solely on profiling or personality traits; and emotion recognition in workplace or educational contexts, except for medical or safety reasons.
The same February 2 milestone also activated the AI literacy obligations under Article 4, which requires providers and deployers to ensure that staff and other persons operating AI systems on their behalf have a sufficient level of AI literacy to do so responsibly.
National market surveillance authorities in each member state hold primary enforcement responsibility. The European AI Office, established within the European Commission, oversees providers of general-purpose AI models and coordinates enforcement across member states.
Companies already deploying systems in affected categories faced an immediate choice: cease or modify operations, obtain a valid exemption, or risk enforcement action. Legal observers noted enforcement would likely start with high-visibility cases before authorities develop more systematic capacity.
Why It Matters
February 2, 2025 marked the first moment a binding legal prohibition on AI practices applied in any major jurisdiction. This was qualitatively different from the transparency requirements, voluntary commitments, and sector-specific rules that had preceded it. The prohibitions created hard legal walls around specific applications regardless of claimed benefits.
The bans on biometric surveillance and social scoring had the clearest immediate impact on technology companies selling to EU public authorities. They also set a template — once prohibitions pass in one major economy, advocates in other jurisdictions can point to a working precedent.
The high fine ceiling (€35M or 7% turnover) placed EU AI Act penalties alongside GDPR sanctions, signaling the Commission's intent to treat AI regulation as commercially material, not cosmetic. Whether that deterrence effect would hold across 27 national enforcement systems with varying capacity and political will was a separate question.
For the broader governance trajectory, February 2 represented the end of the AI regulation debate's purely theoretical phase. The rules were real, they applied, and the next phase would be determined by how aggressively regulators and courts chose to enforce them.