California Governor Gavin Newsom (D) has vetoed an artificial intelligence (AI) safety bill that would have provided first-in-the-US safety measures for large AI models.
The bill, known as SB 1047, mandated third-party testing and whistleblower protections to prevent "severe harm" such as mass casualties or property damage over $500M.
Newsom blocked the bill, authored by Democratic State Senator Scott Wiener, stating it "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data."
Supporters of the veto argue the bill risked stifling innovation, particularly among smaller AI companies and open-source developers, by focusing on hypothetical risks rather than practical, targeted safety solutions. On this view, Newsom's decision preserves California's leadership in AI by promoting a more balanced, science-based approach to regulation that supports innovation while effectively addressing safety concerns.
Critics counter that Newsom has retreated from a crucial first step toward meaningful regulation of potentially dangerous AI technologies. SB 1047 sought to prevent catastrophic harm while encouraging responsible development; on this view, the veto is a missed opportunity that compromises proactive governance and preserves the status quo.