California Signs SB 53 — Transparency in Frontier AI Act
Summary
On September 29, 2025, California Governor Gavin Newsom signed SB 53, the Transparency in Frontier AI Act — the first US state law specifically targeting frontier AI systems. The law requires covered companies with over $500 million in annual revenue to publish safety frameworks, conduct catastrophic risk assessments, report incidents to state regulators, and protect employees who report safety concerns. Fines can reach $1 million per violation. The law took effect January 1, 2026.
What Happened
SB 53 was introduced by Senator Scott Wiener as a scaled-back successor to SB 1047, which Newsom had vetoed in 2024 after intense industry lobbying. Where SB 1047 had imposed liability standards that developers found unworkable, SB 53 focused on transparency and process requirements — what companies must document and disclose — rather than on outcome-based safety standards.
The law's key requirements: covered companies (annual revenue over $500 million, developing frontier AI systems above a compute threshold) must (1) publish a publicly available safety and security protocol explaining how they identify and mitigate catastrophic risks; (2) conduct and document risk assessments before deploying covered systems; (3) report critical safety incidents to California's Office of Emergency Services; and (4) maintain whistleblower protections preventing retaliation against employees who raise safety concerns through internal or external channels.
The compute threshold and revenue threshold combined to focus the law on the frontier development community — primarily companies based in California such as OpenAI, Anthropic, Google DeepMind, and Meta AI.
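The two-pronged coverage test is mechanical enough to sketch as a simple predicate. In the sketch below, the $500 million revenue figure comes from the text above, while the 1e26 FLOP training-compute cutoff is an assumption standing in for the statute's actual compute threshold, which the text does not specify; the function name and structure are illustrative, not drawn from the law.

```python
# Illustrative sketch of SB 53's two-pronged coverage test.
# The $500M revenue threshold is stated in the article; the 1e26 FLOP
# training-compute threshold is an assumed placeholder for the
# statute's frontier-model compute cutoff.

REVENUE_THRESHOLD_USD = 500_000_000
COMPUTE_THRESHOLD_FLOPS = 1e26  # assumed placeholder value

def is_covered(annual_revenue_usd: float, training_flops: float) -> bool:
    """A developer is covered only if it clears both prongs."""
    return (annual_revenue_usd > REVENUE_THRESHOLD_USD
            and training_flops > COMPUTE_THRESHOLD_FLOPS)

# A large lab over both thresholds is covered; a small startup is not,
# even if its model exceeds the compute threshold.
print(is_covered(1_200_000_000, 3e26))  # → True
print(is_covered(50_000_000, 3e26))     # → False
```

The conjunctive structure is the point: a heavily funded company below the compute threshold, or a small startup training a very large model, falls outside the law's scope.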
Newsom's signing statement framed the law as a floor, not a ceiling, and signaled that California would monitor implementation before considering whether additional requirements were warranted. The Attorney General's office was given enforcement authority; fines were set at up to $1 million per violation with no cap on aggregate liability for a course of conduct.
The January 1, 2026 effective date gave covered companies approximately three months to develop or update their safety frameworks and establish reporting procedures.
Why It Matters
SB 53 established that US states could, and would, legislate on frontier AI in the absence of federal action. The law immediately drew attention as a compliance framework that every frontier lab had to take seriously, given California's role as the home state of most major AI developers.
The whistleblower protection provisions were arguably the law's most practically significant element. By creating legally enforceable protections for employees who report safety concerns, shielded from NDAs and other contractual restrictions, the law created a new accountability mechanism that did not depend on public disclosure or regulatory initiative. Employees at covered companies now had explicit statutory protection when reporting concerns externally, without fear of retaliation.
The law also created a data point in the federal-state preemption debate that would intensify in late 2025. California's action demonstrated that if Congress failed to act, states would fill the vacuum — a pattern that historically creates pressure for federal preemption legislation or federal action to establish uniform standards.
Critics argued SB 53 was too weak to matter: its transparency requirements created disclosure obligations but no performance standards, and catastrophic risk assessments could be structured to reach conclusions companies preferred. Supporters argued that public safety frameworks, even if initially performative, create accountability surfaces that become more meaningful as regulatory capacity develops.