California governor halts AI safety bill

California Governor Gavin Newsom has vetoed a proposed artificial intelligence (AI) safety bill, SB 1047, arguing that it could hinder innovation and drive AI developers out of the state.

Authored by Senator Scott Wiener, the bill aimed to implement some of the earliest AI regulations in the US. It faced considerable resistance from major technology companies, including OpenAI, Google, and Meta, which warned that it could hamper the growth of transformative technology.

The bill required AI developers to conduct safety tests on many of the most advanced AI models, defined as those that cost more than $100 million to develop or use a specified amount of computing power. It also mandated hiring third-party auditors to assess their safety practices.

Additionally, the bill required AI developers in the state to outline measures for shutting down AI models in the event of an error, including a ‘kill switch’ to deactivate AI systems that pose a threat.

The bill would have established the Board of Frontier Models, a state entity, to oversee the development of these models.

In a statement, Newsom said SB 1047 “magnified the conversation about threats that could emerge from the deployment of AI,” but added that the bill did not consider whether AI systems are deployed in high-risk environments, involve critical decision-making, or use sensitive data.

He also stressed that the bill did not distinguish between high-risk and low-risk AI uses. “The bill applies stringent standards to even the most basic functions, so long as a large system deploys it,” he said.

Despite the veto, Newsom emphasised the need to protect the public from the risks of AI and announced his intention to work with experts to develop appropriate safeguards.

“Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility. Safety protocols must be adopted,” he said.

Senator Wiener expressed disappointment with the veto, arguing that it leaves AI companies free from regulatory oversight. “This decision gives AI developers the green light to continue operating without binding restrictions from US policymakers, especially as Congress struggles to regulate the tech industry,” Wiener said.

OpenAI, Adobe, and Microsoft are currently part of the Coalition for Content Provenance and Authenticity, which helped create C2PA metadata, an open technical standard providing publishers, creators, and consumers with the ability to trace the origin of different types of media.

California state lawmakers have been trying to introduce several measures designed to ensure that all algorithmic decisions are impartially tested and to protect the intellectual property of deceased individuals from exploitation by AI companies.

In recent weeks, the governor has signed 17 bills, including one to fight misinformation and AI-generated ‘deepfakes’.
