The latest development in the saga of regulating Silicon Valley was the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (also known as SB-1047), which was passed by the California Appropriations Committee on Thursday.
Before it becomes law, the state Assembly and Senate must also vote on the bill.
What is SB-1047?
SB-1047, which is widely known as California’s AI Act and closely watched across the nation as a potential precedent for state laws around generative AI, sets out several rules for AI developers:
- Implement safety and security protocols for covered AI models.
- Ensure such models can be shut down completely.
- Prevent the distribution of models capable of what the act refers to as “critical harm.”
- Retain an auditor to verify compliance with the law.
In summary, the bill establishes a framework to prevent generative AI models from causing significant harm to humanity, such as through nuclear war or bioweapons, or from causing more than $500 million in losses as a result of a cybersecurity incident.
The act defines “covered models” as those trained using computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds $100 million.
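The covered-model definition combines the two thresholds above. A minimal sketch in Python of how that test could be expressed (the function name and inputs are illustrative, not from the bill text):

```python
# Thresholds from SB-1047's "covered model" definition.
COVERED_OPS_THRESHOLD = 1e26          # integer or floating-point ops during training
COVERED_COST_THRESHOLD = 100_000_000  # training cost in USD

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model meets the bill's 'covered model' definition:
    more than 10^26 operations during training, at a cost over $100 million."""
    return (training_ops > COVERED_OPS_THRESHOLD
            and training_cost_usd > COVERED_COST_THRESHOLD)

print(is_covered_model(3e26, 150_000_000))  # True: crosses both thresholds
print(is_covered_model(5e24, 30_000_000))   # False: below both thresholds
```

A frontier training run of roughly 10^26 operations is the scale the bill targets; smaller or cheaper runs fall outside the definition entirely.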
Anthropic’s input is reflected in the most recent version.

The version of the bill approved on Thursday, whose primary author is Sen. Scott Wiener, D-Calif., includes several changes endorsed by AI maker Anthropic.
Anthropic successfully asked the state to change the bill’s language around how the state’s attorney general can take legal action against companies that violate the law. Under the most recent version, companies are no longer required to submit safety test results under penalty of perjury; instead, developers submit public statements, which do not carry the same legal weight.
Additional changes include:
- A change in language from requiring a “reasonable assurance” of safety to requiring “reasonable care.”
- An exception under which AI researchers who spend less than $10 million fine-tuning an open-source covered model are not considered the model’s developers.
SEE: Anthropic and OpenAI have conducted their own research into how generative AI produces biased content.
The bill no longer requires the creation of a Frontier Model Division, a new agency that would have regulated the AI industry. Instead, the existing Government Operations Agency will host a Board of Frontier Models focused on forward-looking safety guidance and audits.
While Anthropic contributed to the legislation, other major organizations, including Google and Meta, have expressed opposition to it. Andreessen Horowitz, the venture capital firm known as a16z that backs several AI companies, has vocally opposed SB-1047.
Why is SB-1047 controversial?
According to some industry and Congressional representatives, the act may restrict development and make it especially difficult to work with open-source AI models. Among the bill’s detractors is Hugging Face co-founder and CEO Clement Delangue, as noted by Fast Company.
According to a survey conducted in April by the pro-regulation think tank Artificial Intelligence Policy Institute, a majority of Californians supported the bill as it was then written, with 70% of voters agreeing that “future powerful AI models may be used for dangerous purposes.”
The act is also publicly supported by researchers Geoffrey Hinton and Yoshua Bengio, who are known as “godfathers of AI” for their pioneering work on deep learning. The act would “protect the public,” Bengio wrote in an op-ed in Fortune on Aug. 15.
On Thursday, eight of the 52 members of California’s Congressional delegation signed a letter warning that the act would “create unnecessary risks for California’s economy with very little benefit to public safety.” They contend that it is premature to create standardized evaluations for AI while federal bodies such as NIST are still developing those standards.
They also suggest the definition of critical harm may be flawed, saying the bill is misguided in focusing on large-scale disasters, such as nuclear weapons, while “largely ignoring observable AI risks like misinformation, discrimination, nonconsensual deepfakes, climate impacts, and workforce displacement.”
SB-1047 provides specific protections for AI company whistleblowers under the California Whistleblower Protection Act.
The act highlights the difficulty of striking a balance between innovation and regulation.
“We can advance both innovation and safety; the two are not mutually exclusive,” Wiener wrote in a public statement on Aug. 15. “While the amendments do not reflect all of the changes requested by Anthropic, a world leader on both innovation and safety, we accepted a number of very reasonable amendments, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry.”
He noted that Congress is “gridlocked” on AI regulation but said “California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation.”
Next, the full Assembly and Senate will need to pass the legislation. If approved, the bill will then go to Gov. Gavin Newsom for consideration, likely in late August.