If Newsom had signed the bill into law, it would have required testing of AI models to ensure they don’t lead to mass death, attacks on public infrastructure, or cyberattacks.
The legislation also would have created whistleblower protections, as well as a public cloud for the development of AI for the public good.
SB 1047 also would have established the Board of Frontier Models, a California state entity tasked with monitoring the development of AI models.
Newsom also said the bill regulates AI in a blanket fashion and lacks empirical analysis of the real threats AI poses.
“A California-only approach may well be warranted—especially absent federal action by Congress—but it must be based on empirical evidence and science,” he wrote.
Newsom also took issue with the fact that the bill applies mainly to expensive AI models—those that cost more than $100 million to develop or require more than a certain quantity of computing power to train. In the governor’s view, inexpensive AI models could pose just as much of a threat to the public good or critical infrastructure.
“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047,” Newsom wrote.
The governor said the legislation could slow advancements that serve the public good, and he warned that SB 1047 could give the public “a false sense of security” when it comes to AI.
Additionally, Newsom said SB 1047 fails to consider whether an AI model is deployed in high-risk environments or uses sensitive information.
“Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it,” Newsom wrote. “I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Newsom said that he believes California must pass a law to regulate AI, and that he remains “committed to working with the Legislature, federal partners,” and others in the state and nation to “find the appropriate path forward, including legislation and regulation.”
San Francisco Democratic Sen. Scott Wiener, the bill’s author, said the veto was “a missed opportunity for California to once again lead on innovative tech regulation” and “a setback for everyone who believes in oversight of massive corporations that are making critical decisions” regarding the use of AI.
Supporters of the bill included Elon Musk, AI startup Anthropic, the Center for AI Safety, tech equity nonprofit Encode Justice, the National Organization for Women, and whistleblowers from AI company OpenAI, among others.
Musk’s history of criticizing California lawmakers made his endorsement of SB 1047 a surprise.
Other supporters included the Service Employees International Union, the Latino Community Foundation, and SAG-AFTRA.
Opponents of the bill included tech behemoths such as Google, Meta, and OpenAI, which argued the bill would undermine the California economy and the AI industry.
Several members of the California congressional delegation asked Newsom to veto the bill before the California Legislature passed it last month in a landslide vote.
“We are grateful to Governor Newsom for the veto of SB 1047. Regulatory efforts to promote #AI safety are critical, but SB 1047 missed the mark in key ways,” wrote California Chamber of Commerce President and CEO Jennifer Barrera. “As a consequence, the bill would have stifled AI innovation, putting California’s place as the global hub of innovation at tremendous risk.”
Other bills require businesses to disclose how they use data to train generative AI models, as well as to supply tools so consumers can see whether a given piece of media was made by humans or by AI.
The SB 1047 veto derails the California Legislature’s efforts to align the state’s AI regulation with the European Union’s AI Act.
In September 2023, Newsom issued an executive order instructing California agencies to perform risk assessments of the threats and vulnerabilities AI poses to California’s critical infrastructure.
The U.S. AI Safety Institute, housed within the National Institute of Standards and Technology, is developing guidance on security risks. Newsom said the federal agency takes “evidence-based approaches” to safeguarding the public.