EU AI Act General-Purpose AI Obligations Enter Force
Summary
On August 2, 2025, one year after the EU AI Act entered into force, the Chapter V obligations on general-purpose AI (GPAI) model providers became applicable across the EU. The rules require GPAI providers to maintain technical documentation, publish a copyright-relevant summary of their training data, and document model evaluation results. A GPAI Code of Practice published on July 10, 2025 provided the compliance framework. Enforcement powers for the European AI Office were deferred to August 2, 2026, and fines for GPAI violations are capped at €15 million or 3% of global annual turnover, whichever is higher.
What Happened
GPAI obligations under the EU AI Act differ from the high-risk AI rules in being framed around the capabilities of the model rather than the use case. Any provider placing a general-purpose AI model on the EU market falls within scope of the documentation and transparency obligations; models whose training compute exceeds 10^25 FLOPs are additionally presumed to pose systemic risk and face stricter requirements, while Commission guidance uses a lower indicative threshold (around 10^23 FLOPs) to identify general-purpose models in the first place.
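To make the threshold concrete, the sketch below estimates training compute with the common "6 × parameters × tokens" rule of thumb for dense transformers and compares it to the 10^25 FLOP systemic-risk presumption. The 6ND estimate is a widely used approximation, not part of the Act, and the example figures are illustrative.

```python
# Rough check of whether a training run crosses the AI Act's 10^25 FLOP
# presumption threshold for systemic-risk GPAI models.
# The ~6 * parameters * tokens compute estimate is a standard heuristic
# for dense transformers, not a figure from the regulation itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold


def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the 6ND rule of thumb."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the 10^25 FLOP threshold."""
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Illustrative example: a 70B-parameter model trained on 15T tokens
# lands at roughly 6.3e24 FLOPs, just under the presumption threshold.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{presumed_systemic_risk(70e9, 15e12)}")
```

Because the presumption is rebuttable and the Commission may designate models on other criteria, a check like this is only a first-pass screen, not a compliance determination.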
The August 2 activation covered: technical documentation requirements specifying model architecture, training methodology, and evaluation results; training data transparency, including a summary of datasets used that is sufficient to assess copyright compliance; instructions for downstream users explaining capabilities and limitations; and incident reporting obligations for systemic-risk GPAI models.
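The four obligation areas above could be tracked in a simple internal compliance record. The structure below is a minimal sketch; its field names are this example's own shorthand, not the Act's Annex XI terminology.

```python
from dataclasses import dataclass, field

# Illustrative record mirroring the four GPAI obligation areas described
# above. Field names are this sketch's own, not the regulation's terms.


@dataclass
class GpaiComplianceRecord:
    model_name: str
    # Technical documentation: architecture, training methodology, evaluations
    architecture_summary: str = ""
    training_methodology: str = ""
    evaluation_results: list[str] = field(default_factory=list)
    # Training data transparency: copyright-relevant dataset summary
    training_data_summary: str = ""
    # Downstream guidance: capabilities and limitations
    usage_instructions: str = ""
    # Incident reporting applies only to systemic-risk models
    systemic_risk: bool = False
    reported_incidents: list[str] = field(default_factory=list)

    def open_items(self) -> list[str]:
        """List obligation areas with no content yet."""
        gaps = []
        if not (self.architecture_summary and self.training_methodology):
            gaps.append("technical documentation")
        if not self.training_data_summary:
            gaps.append("training data summary")
        if not self.usage_instructions:
            gaps.append("downstream instructions")
        return gaps


record = GpaiComplianceRecord("example-model")
print(record.open_items())
```

A record with every field empty reports all three documentation areas as open; incident reporting is omitted from the gap check since an empty incident list may simply mean no incidents occurred.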
The GPAI Code of Practice — a voluntary compliance document developed through a multi-stakeholder process including frontier developers, civil society, and member state experts — was finalized on July 10, 2025, three weeks before the obligations became applicable. The Commission treated adherence to the Code as creating a rebuttable presumption of compliance. Companies that participated in the Code drafting process — including all major frontier labs with EU operations — generally indicated intent to comply.
Enforcement authority rests with the European AI Office, which has exclusive competence to supervise GPAI model providers; national market surveillance authorities oversee AI systems deployed within their territories. The one-year enforcement deferral to August 2026 was designed to give companies time to implement compliance programs and the AI Office time to build enforcement capacity.
Why It Matters
The GPAI obligations represented the EU AI Act's most direct reach into frontier AI development. The February 2025 prohibitions had addressed applications; the August 2025 rules addressed the models themselves. This was the first time a major jurisdiction had imposed documentation and transparency requirements on the development process for large-scale AI models, not merely their deployment.
The training data transparency requirement carried particular weight because it intersected with ongoing copyright litigation. GPAI providers operating in the EU were now required to summarize their training data in sufficient detail to support copyright compliance assessments — a disclosure that could provide evidence in civil copyright suits and might conflict with existing confidentiality practices.
The Code of Practice mechanism illustrated a key dynamic in the EU's regulatory approach: the Commission used the threat of direct enforcement to incentivize industry participation in developing compliance standards. The result was rules that industry had effectively co-authored, which reduced compliance friction but also raised questions about regulatory capture.
The enforcement deferral to 2026 meant the practical impact of the GPAI rules would be felt with a lag. But the disclosure and documentation requirements created immediate compliance costs, and the ongoing uncertainty about how the European AI Office would interpret edge cases shaped commercial decisions about which models to make available in the EU.