California Governor Newsom Vetoes AI Safety Bill SB 1047
Summary
California Governor Gavin Newsom vetoed SB 1047, a bill that would have imposed safety testing requirements and liability provisions on developers of large AI models. The veto, which followed an intense industry lobbying campaign, became a pivotal moment in the debate over state-level AI regulation and a demonstration of the AI industry's political power.
What Happened
On September 29, 2024, Governor Gavin Newsom vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which had been authored by State Senator Scott Wiener and passed both chambers of the California legislature. The bill would have required developers of AI models trained above a compute threshold (approximately 10^26 FLOP and costing more than $100 million to train) to conduct safety testing, implement a "kill switch," and face legal liability if their models caused "critical harm."
The bill became one of the most contentious pieces of AI legislation ever proposed, generating intense debate within the AI community. Supporters included Geoffrey Hinton, Yoshua Bengio, and other prominent AI safety researchers, who argued that frontier AI models posed genuine catastrophic risks warranting regulatory oversight. Opponents included Anthropic (which initially opposed the bill before adopting a more nuanced position), Meta, Google, and much of Silicon Valley, which argued that the bill's liability provisions would stifle innovation and drive AI development out of California.
In his veto message, Newsom argued that the bill "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or the use of sensitive data" — criticizing its focus on model size rather than application context. He simultaneously signed several other AI-related bills and pledged to work with experts on more targeted AI safety legislation.
Why It Matters
The SB 1047 veto was a landmark moment in AI policy for several reasons. It demonstrated the formidable political power of the AI industry, which mounted an aggressive lobbying campaign against the bill. It also revealed deep divisions within the AI safety community itself: some organizations that had advocated loudly for AI regulation opposed this particular bill because of its specific mechanisms.
The veto also highlighted a fundamental tension in AI regulation: whether to regulate based on model characteristics (size, compute, capability) or based on application context (how the model is deployed and what it's used for). Newsom's veto message explicitly sided with the application-based approach, which most of the industry preferred because it placed obligations on deployers rather than developers.
For the open-source community, the bill's veto was a relief — SB 1047's provisions would have been particularly burdensome for open-source developers who, unlike commercial companies, couldn't easily monitor or control downstream uses of their models. But the relief was tempered by the knowledge that similar bills would continue to be introduced, and that the underlying safety concerns hadn't been addressed.