The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (also known as SB-1047), the most recent step in the saga of regulating Silicon Valley, was passed by the California Appropriations Committee on Thursday.
Before it can become law, the state Assembly and the state Senate must still vote to pass it.
What is SB-1047?
SB-1047, widely known as California’s AI Act and closely watched across the nation as a potential precedent for state-level generative AI laws, would:
- Establish safety and security standards for covered AI models.
- Ensure such models can be shut down completely.
- Prevent the distribution of models capable of causing what the act defines as “critical harm.”
- Require an auditor to verify compliance with the act.
In summary, the bill establishes a framework to prevent generative AI models from causing significant harm to humanity, such as through nuclear war or bioweapons, or from causing more than $500 million in losses as a result of a cybersecurity incident.
The act defines “covered models” as those trained using computing power greater than 10^26 integer or floating-point operations, at a cost exceeding $100 million.
Anthropic’s input is reflected in the most recent version of the bill.
Sen. Scott Wiener, D-Calif., the bill’s primary author, also proposed some changes to the version of the bill that was approved on Thursday.
Anthropic successfully asked the state to change the bill’s language on when the state’s attorney general may take legal action against companies that violate the law. In the most recent version, companies are no longer required to disclose safety test results under penalty of perjury; instead, developers will submit statements, which do not carry the same legal weight.
Additional amendments include:
- A change in language from requiring “reasonable assurance” of safety to requiring “reasonable care.”
- A caveat: AI developers who spend less than $10 million fine-tuning an open-source covered model are not considered the model’s developers.
SEE: Anthropic and OpenAI have conducted their own research into how generative AI produces biased content.
The creation of a Frontier Model Division, a new agency that would have overseen the AI industry, is no longer in the bill. Instead, a Board of Frontier Models, focused on forward-looking safety guidance and audits, will be established within the existing Government Operations Agency.
While Anthropic contributed to the legislation, other major companies like Google and Meta have voiced their opposition. Andreessen Horowitz, the venture capital firm known as a16z that backs some AI companies, has vocally opposed SB-1047.
Why is SB-1047 controversial?
According to some industry and congressional representatives, the act could restrict innovation and make it difficult to work with open-source AI models. Among the bill’s detractors is Hugging Face co-founder and CEO Clement Delangue, as noted by Fast Company.
According to a survey conducted in April by the pro-regulation think tank Artificial Intelligence Policy Institute, a majority of Californians favored the bill as it stood at the time, with 70% of voters agreeing that “future powerful AI models may be used for dangerous purposes.”
The act is also backed in full by researchers Geoffrey Hinton and Yoshua Bengio, who are known for their groundbreaking work in deep learning. The act would “protect the public”, Bengio wrote in an op-ed in Fortune on Aug. 15.
Eight of the 52 members of California’s congressional delegation signed a letter on Thursday warning that the act would “create unnecessary risks for California’s economy with very little benefit to public safety.” They contend that it is premature to codify standardized evaluations for AI while government bodies like NIST are still working on developing those standards.
They also suggest the bill’s definition of critical harm is misguided, saying the bill misses the mark by focusing on large-scale disasters, such as nuclear weapons, while “largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement”.
SB-1047 provides specific protections for whistleblowers at AI companies under the California Whistleblower Protection Act.
The act highlights the difficulty of striking a balance between innovation and regulation.
“We can advance both innovation and safety; the two are not mutually exclusive”, Wiener wrote in a public statement on Aug. 15. Although the amendments do not reflect all of the changes requested by Anthropic, “a world leader in both innovation and safety,” he said the bill accepted a number of very reasonable amendments and addressed the core concerns raised by Anthropic and many others in the industry.
He noted that Congress is “gridlocked” on AI regulation, but “California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation”.
Next, the Assembly and Senate will need to pass the legislation. If approved, the bill will then be considered by Gov. Gavin Newsom, likely in late August.