SB 1047, a California bill that seeks to regulate artificial intelligence (AI) models, has reached the Assembly Appropriations Committee and has drawn both support and criticism from AI researchers.
The legislation would require AI systems to be tested for safety before release and to ship with built-in safety guardrails. It would apply only to models that cost more than $100 million in computing power to develop (a threshold no model has yet met), and it would allow the state attorney general to sue AI makers for damages caused by their products.
Supporters of the bill argue that AI developers tout the need for safety regulation to win good publicity, then turn around and stifle any attempt to bring it to fruition. In this view, a well-funded group of developers and investors has spread misinformation and fear about the bill in order to shield themselves from liability for the harms of their products and rake in cash without oversight. The bill's provisions, supporters contend, are common-sense measures that are popular among researchers and the public and necessary to mitigate the long-term harms of AI.
Opponents counter that if SB 1047 passes, regulation will snuff out the AI industry in the US and allow countries like China to dominate the field. They argue that the bill is based almost entirely on hypothetical, worst-case harms that may never materialize and relies more on fear than sound reasoning, and that its liability clauses could penalize anyone who works with AI and end development entirely. What is needed instead, in this view, is rational regulation made at the national level in consultation with business leaders.