
California Passes Landmark Bill to Regulate Artificial Intelligence Companies

California passes first-of-its-kind regulatory legislation focused on the safety of artificial intelligence programs.

What Happened?

This week, Governor Gavin Newsom signed into law California’s State Senate Bill 53, also known as the Transparency in Frontier Artificial Intelligence Act. The legislation, the first of its kind in the United States, imposes new regulations on the AI industry, requiring companies to meet transparency requirements and report AI-related safety incidents.

Governor Newsom issued a statement saying, ‘California has proven we can establish regulations to protect our communities while ensuring the growing AI industry continues to thrive.’ The new law puts a specific emphasis on the safety of new AI programs.

Why it Matters

The new law will likely have global ramifications because thirty-two of the top fifty AI companies in the world are headquartered in California. Governor Newsom emphasized that aspect of the law during the signing ceremony by saying that California’s ‘status as a global leader in technology allows us a unique opportunity to provide a blueprint for AI policies beyond our borders.’

What makes the new law significant is its focus on AI-related safety. According to the author of the original bill, California state Sen. Scott Wiener, when it comes to AI, ‘we have a responsibility to support innovation while putting in place commonsense guardrails to understand and reduce risk.’ The bill passed despite increased lobbying by big tech companies seeking to limit the AI regulations imposed by state governments.

California’s law could soon become the basis for similar legislation at the federal level. According to NBC News, Senators Josh Hawley and Richard Blumenthal have proposed a federal bill that would require leading AI developers to ‘evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents.’

In its current form, the federal bill would create an Advanced Artificial Intelligence Evaluation Program housed within the U.S. Department of Energy. Participation in the evaluation program would be mandatory, much like California SB 53’s transparency and reporting requirements.

The potential risks of AI span a wide range of threat vectors. Israel has reportedly used AI programs to speed up its identification and targeting of enemy forces in Gaza, although Israeli officials have insisted that any attack orders must first receive human approval before being carried out. At the individual level, chat programs have been accused of contributing to the deaths or suicides of an unknown number of users worldwide.

How it Affects You

Technology is always ahead of the law, and that is certainly the case with AI. New technologies can be created and implemented much faster than new laws can be passed to govern their use. In just the past five years, AI has undergone a rapid evolution in capabilities and applications, and it now appears that state and federal lawmakers in the United States are finally catching up.