White House Releases National AI Legislative Framework for Congress
Summary
On March 20, 2026, the White House released a National AI Legislative Framework, a seven-pillar blueprint of recommended legislation for Congress to consider. The framework explicitly opposes the creation of a new federal AI regulatory agency, urges statutory preemption of state AI laws, endorses regulatory sandboxes for AI development, and calls for streamlined liability protections for AI developers acting in good faith. It does not propose binding risk assessments, mandatory safety reporting, or minimum performance standards for AI systems.
What Happened
The legislative framework was developed by the Office of Science and Technology Policy (OSTP) in consultation with the National Economic Council, the National Security Council, and the Domestic Policy Council. It was released on March 20, 2026 — approximately fourteen months into the Trump administration and approximately fifteen months before the scheduled expiration of the first term.
The seven pillars:
1. Statutory preemption of state AI laws to establish a uniform national market.
2. Regulatory sandbox authority for federal agencies to permit AI deployments outside existing regulatory frameworks on a pilot basis.
3. Liability safe harbors for AI developers and deployers who adhere to federal standards.
4. Procurement reform streamlining federal AI acquisition and reducing security review burdens.
5. An export framework protecting advanced AI technology while enabling sales to allies.
6. Workforce and education investment to maintain AI talent pipelines.
7. Research and development investment in compute, data, and model evaluation infrastructure.
The framework was explicitly framed as legislative recommendations rather than a bill — the administration acknowledged it lacked the votes for comprehensive legislation and was using the document to signal priorities, influence committee agendas, and build a record for the 2026 midterm campaign.
The statutory preemption pillar was the most legally significant: unlike the December 2025 EO's preemption approach (which relied on executive authority and was constitutionally contested), statutory preemption enacted by Congress would be legally durable and broadly effective. The framework proposed a field-preemption clause covering all AI-specific state laws, with carve-outs limited to child protection and state government procurement.
The absence of any safety requirements was notable: the framework included no mandatory measures, and not even voluntary reporting frameworks or minimum transparency standards. Its safe harbor provisions were premised on adherence to CAISI-developed voluntary standards, but CAISI itself had moved away from safety evaluation and toward commercial testing facilitation.
Why It Matters
The legislative framework represented the most complete articulation of the Trump administration's AI governance vision at the statutory level. Taken alongside the January 2025 Biden EO revocation, the April 2025 OMB memos, the July 2025 AI Action Plan, and the December 2025 preemption EO, it completed a coherent, if contested, regulatory philosophy: federal acceleration, no new regulators, preemption of state rules, liability protection for industry, and an international-competition framing.
Whether Congress would act on the framework was uncertain. Republicans in the Senate and House were broadly supportive of the deregulatory direction, but state preemption was more contentious — several Republican senators represented states with their own AI governance initiatives. Democratic opposition was unified against both preemption and the absence of safety requirements.
The framework also arrived at a moment when the EU and Council of Europe were moving in the opposite direction — activating binding obligations, finalizing codes of practice, and in the EU's case debating how to maintain enforcement ambition despite competitive pressure. The two largest economies in the transatlantic alliance were now governed by AI frameworks that differed not just in degree but in kind: one premised on binding rules and enforcement, the other on voluntary standards and liability protection.
For the global governance question, the framework confirmed that a binding multilateral AI treaty with US participation was not achievable while this administration held power. The question for governance advocates was whether the EU-led regulatory model, the Council of Europe human rights treaty, and national-level rules in other jurisdictions could create sufficient gravitational pull to establish de facto global norms even in the absence of US participation.