Biden Signs Executive Order on Safe, Secure, and Trustworthy AI
Summary
President Joe Biden signed the most sweeping US government order on AI to date, establishing safety testing and reporting obligations for developers of powerful AI systems and directing federal agencies to develop watermarking standards for AI-generated content. The Executive Order represented the first serious attempt by the US government to regulate frontier AI development.
What Happened
On October 30, 2023, President Biden signed Executive Order 14110 on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order was the most wide-ranging US government directive on AI to date, covering safety, security, privacy, equity, civil rights, consumer protection, labor, innovation, and international leadership.
Key provisions included:
- Safety testing requirements: Companies developing foundation models that pose "serious risks to national security, national economic security, or national public health and safety" were required to share safety test results with the federal government before public release. The threshold was set at models trained using more than 10^26 floating-point operations (FLOP) of compute (see the sketch at the end of this section).
- Watermarking and content authentication: Federal agencies were directed to develop standards for AI-generated content labeling.
- Privacy: Directed support for privacy-preserving techniques and evaluation of existing privacy protections.
- Civil rights and equity: Directed agencies to address algorithmic discrimination.
- Federal workforce: Launched an AI talent surge across the federal government.
- International leadership: Directed engagement with allies on AI governance frameworks.
The order invoked the Defense Production Act to require reporting from companies training large models, a legally significant mechanism that gave the requirements teeth.
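To make the compute threshold concrete, here is a minimal sketch of how a developer might estimate whether a training run crosses it. It assumes the common ~6 × parameters × tokens approximation for dense-transformer training compute, a rule of thumb from the scaling-laws literature rather than anything the order specifies, and the model configurations below are purely illustrative.

```python
# Minimal sketch: estimating training compute against the EO 14110
# reporting threshold. The 6 * N * D approximation is a standard rule
# of thumb for dense transformers, not part of the order itself.

THRESHOLD_OPS = 1e26  # EO 14110 threshold: >10^26 operations of training compute


def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate training FLOP for a dense transformer (~6 * N * D)."""
    return 6.0 * n_params * n_tokens


def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """Would this (hypothetical) training run trigger the reporting requirement?"""
    return estimated_training_flop(n_params, n_tokens) > THRESHOLD_OPS


# Purely illustrative model configurations, not real training runs:
for n_params, n_tokens in [(70e9, 2e12), (1.8e12, 10e12)]:
    flop = estimated_training_flop(n_params, n_tokens)
    print(f"{n_params:.1e} params x {n_tokens:.1e} tokens "
          f"-> {flop:.2e} FLOP, reportable: {crosses_threshold(n_params, n_tokens)}")
```

Under this approximation, a 70-billion-parameter model trained on 2 trillion tokens lands around 8×10^23 FLOP, far below the line, while a run in the trillions of parameters over tens of trillions of tokens approaches or crosses 10^26, which is why the threshold was widely read as targeting only the largest frontier training runs.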
Why It Matters
The Executive Order was the US government's most substantive intervention in AI development, and it set the template for how democratic governments could regulate AI without passing legislation — a process that would have taken far longer given Congressional gridlock.
The compute threshold of 10^26 FLOP was particularly significant: it was an attempt to create a bright-line rule for which models warranted government oversight, and it would influence regulatory thinking globally (the EU AI Act, for instance, later adopted a 10^25 FLOP presumption for general-purpose models posing systemic risk). Critics argued the threshold was either too high (missing smaller but potentially dangerous models) or too low (capturing models that posed no real risk), prefiguring an ongoing debate about compute-based regulation.
The order also established the US approach as distinct from the EU's more prescriptive regulatory framework. Where the EU AI Act focused on risk categories and compliance obligations, the Executive Order emphasized reporting, testing, and voluntary commitments — reflecting a lighter regulatory philosophy that would itself become contested.
Notably, many provisions of this Executive Order would later be revoked or modified under the Trump administration in 2025, raising questions about the durability of executive action as a regulatory tool for AI.