California Gov. Gavin Newsom vetoed SB 1047, a controversial AI regulation bill, on September 29. The act “falls short of providing a flexible, complete answer to curbing the possible fatal dangers”, the governor’s office wrote. The announcement also included alternative measures to support California’s AI industry and prevent harms.
Newsom: The bill “could give the public a false sense of security”
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, known as SB 1047, would have been the strongest generative AI regulation in the nation. It sought to impose strict safety and security standards on large AI developers, protect industry whistleblowers, and require developers to be able to completely shut down their models.
The bill was approved by the California State Assembly and Senate in August.
Because the bill targets large, expensive models rather than smaller ones deployed in high-risk situations, Newsom claimed in his statement that SB 1047 “establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.”
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom wrote. Instead, he argued, the bill imposes stringent standards on even the most fundamental functions so long as a sizable system uses them. “I don’t think this is the best way to protect the public from actual threats posed by the technology,” he added.
However, in the same September 29 announcement, the governor outlined several new initiatives related to generative AI:
- California’s Office of Emergency Services will expand its current work to include assessing potential threats posed by generative AI.
- The state will convene a group of AI experts and academics, including Stanford University professor and AI “godmother” Fei-Fei Li, to “help California develop workable guardrails”.
- The state will convene academics, labor stakeholders, and the private sector to “explore approaches to use GenAI technology in the workplace”.
Does California’s AI bill go too far or not far enough?
Sen. Scott Wiener (D-Calif.), the primary author of SB 1047, criticized Newsom’s decision in an X post on Sunday.
“This veto is a setback for everyone who believes in oversight of large corporations that are making important decisions that affect the safety and welfare of the public and the planet’s future,” he wrote.
“The Governor’s veto message lists a range of criticisms of SB 1047: that the bill doesn’t go far enough, yet goes too far; that the risks are urgent, but we must move with caution,” Wiener wrote in a formal response to Newsom’s decision. SB 1047 was created by some of the world’s most influential AI experts, he added, and any claim that it is not based on empirical evidence is patently absurd.
California has been closely watched by the federal government, as the bill might have served as a case study for national AI regulation. So far, the federal government has largely refrained from implementing broad or specific AI regulations, opting instead for voluntary agreements.
SEE: The United States government ratified a global agreement mandating that AI comply with human rights and be subject to oversight.
Companies including OpenAI, Meta, and Google opposed SB 1047, arguing it would slow innovation or impose “technically infeasible requirements”. Other tech figures, including Elon Musk and Anthropic, which contributed to the drafting of the bill, favored how it addressed potential AI risks.
What does the veto mean for businesses?
The veto means that large-scale AI projects in California will be less subject to state scrutiny than they otherwise would have been, according to business stakeholders involved in AI strategy. Newsom has, however, approved other AI regulations, such as the prohibition of deepfakes during election season and rules for the use of AI in industries like healthcare and insurance.
Additionally, the veto allows large AI models to continue to be developed in California without “kill switches”; organizations can establish their own AI governance as they see fit. In August, a Deloitte survey found that “balancing innovation with regulation” was the most important ethical issue in AI development and deployment among polled organizations.