California Senate Bill 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced in the state Senate on Feb. 7 and passed the chamber on May 16 with a 32-1 vote.
Most of the people I asked who are building companies in the AI space were not aware of the bill's existence until last week. All were surprised at how quickly it is moving forward, especially given that the European Union's AI Act, which regulates risk by use case rather than by infrastructure as this bill proposes to do, took three years of stakeholder engagement and collaboration before passing a few months ago.
Though it heads to an Assembly hearing later this month, the bill's momentum has left many people working in the field feeling blindsided and concerned that what is being proposed will bring AI innovation in California to a halt, driving much-needed revenue and foot traffic out of downtown metro corridors and to other geographies.
There are a few core concerns that keep surfacing:
The bill does not actually protect people from harm. SB 1047 is written to regulate the potential future hazards that come with being able to train large language models on more data than most of us will ever have access to in our lifetimes. However, the size of the LLM is not the determining factor in its capacity for harm. Rather, the determining factor is how the model is used.
If we are focused on harm reduction, we should look at the myriad of already-known use cases and regulate those. Bad actors are going to do the same things they are doing today, except with much better tools.
Definitions and penalties are arbitrary and easily bypassed. What's the difference between doing $455 million worth of harm and $550 million? Between modifying a model 22% and 26%? The only practical difference under this bill is who holds the "big P" prison bag at the end of the day: the developer or the bad actor. I won't belabor the model-size definition much more, but within single-digit years developers will be cheaply building very complex and large LLMs that still fall beneath the bill's thresholds of 10^26 integer or floating-point operations of compute and $100 million in training cost.
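To make the compute threshold concrete, here is a minimal sketch using the widely cited rule of thumb that training a dense transformer takes roughly 6 x parameters x training tokens total operations. The model sizes and token counts below are hypothetical illustrations, not figures from the bill.

```python
# Minimal sketch: does a training run cross SB 1047's 10^26-operation
# threshold? Uses the common approximation of ~6 * parameters * tokens
# total operations for dense transformer training. All model sizes below
# are hypothetical illustrations, not figures from the bill.

THRESHOLD_OPS = 1e26  # SB 1047 compute threshold (integer or FP operations)

def training_ops(params: float, tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6 * params * tokens

runs = [
    ("hypothetical 70B-parameter model, 2T tokens", 70e9, 2e12),
    ("hypothetical 400B-parameter model, 15T tokens", 400e9, 15e12),
    ("hypothetical 2T-parameter model, 15T tokens", 2e12, 15e12),
]

for name, params, tokens in runs:
    ops = training_ops(params, tokens)
    status = "covered" if ops >= THRESHOLD_OPS else "below threshold"
    print(f"{name}: ~{ops:.1e} ops -> {status}")
```

Under this approximation, even a 400-billion-parameter model trained on 15 trillion tokens lands around 3.6 x 10^25 operations, comfortably under the threshold. That gap is exactly what the paragraph above is pointing at.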
The solution to keeping open source development alive is to hire consultants, and there is no funding for it. The bill's answer to saving open source development (which is where much of AI innovation has come from) is to hire consultants to build a fully contained, state-owned and state-run public cloud computing cluster called CalCompute. Very simply, this is an extremely expensive venture with high ongoing expenses: it amounts to starting and maintaining a state-of-the-art connected-hardware business with taxpayer dollars.
Given the amount of personally identifiable information that will likely be stored in research data, enterprise-grade security practices will need to be maintained. There is the initial setup: GPUs, racks, networking, security, power, cooling and continuous monitoring. Because American researchers collaborate at the global level all the time, there will also need to be an instance in the EU to comply with privacy laws that require EU data to be stored physically in the EU. Infrastructure to support archiving and deletion will be required, so you'll need staff software engineers, plus staff to maintain and upgrade the hardware (which becomes obsolete faster and faster) and to work with researchers to build and deploy models. And because the bill states that this work needs its budget appropriated in a budget act, the proposed solution to a major concern is merely a contemplation.
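To give a sense of the scale of "high ongoing expense," here is a minimal back-of-envelope sketch. Every figure in it is an assumption I've chosen purely for illustration; the bill itself supplies no numbers, which is exactly the problem.

```python
# Back-of-envelope sketch of CalCompute-style costs. Every figure is an
# illustrative assumption; the bill appropriates no actual budget.

gpus = 10_000              # assumed accelerator count
gpu_capex = 30_000         # assumed $ per accelerator, hardware only
facility_multiplier = 1.5  # assumed overhead for racks/networking/power/cooling
refresh_years = 4          # assumed hardware refresh cycle
staff = 60                 # assumed engineering, security and compliance headcount
cost_per_head = 300_000    # assumed fully loaded $ per employee per year
power_mw = 15              # assumed average facility draw in megawatts
power_cost_mwh = 100       # assumed $ per megawatt-hour

capex = gpus * gpu_capex * facility_multiplier
annual_refresh = capex / refresh_years
annual_staff = staff * cost_per_head
annual_power = power_mw * 24 * 365 * power_cost_mwh  # MW * hours/year * $/MWh

print(f"initial buildout:        ${capex / 1e6:,.0f}M")
print(f"annual hardware refresh: ${annual_refresh / 1e6:,.0f}M")
print(f"annual staff:            ${annual_staff / 1e6:,.0f}M")
print(f"annual power:            ${annual_power / 1e6:,.0f}M")
```

Even under these placeholder assumptions, the state is on the hook for several hundred million dollars up front and well over $100 million a year, before an EU mirror instance, archiving systems or security audits are funded at all.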
The high administrative cost and burden means only the largest enterprises will even be able to play. Andrew Ng is one of the most respected names in the field of AI, and I think he says it best in this Financial Times article. “If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” said Ng, a renowned computer scientist who led AI projects at Alphabet’s Google and China’s Baidu, and who sits on Amazon’s board. “It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”
There is time left before the Assembly votes on SB 1047, and in that time I hope these important questions are grappled with. I don't see many arguing with a kill switch. I don't see anyone saying the threat of harm is zero. The AI community broadly supports oversight, but as the bill stands today, it is much easier to go build in another state than to do this dance.
Annie Tsai is chief operating officer at Interact (tryinteract.com), early stage investor and advisor with The House Fund (thehouse.fund), and a member of the San Mateo County Housing and Community Development Committee. Find Annie on Twitter @meannie.
(4) comments
Good morning, Annie
You forgot to mention state senator Scott Wiener is sponsoring SB 1047... that may be enough for some folks to oppose this bill. According to the senator, SB 1047 "gives us an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks."
Is he wrong?
He's not wrong. A few months ago at Progress Seminar I was asked what kind of regulation would be reasonable for AI; at the time the EU AI Act had just been passed, and I felt it struck a nice balance on risk definition, oversight and administration. The foundational issue here is regulating model size as opposed to use case.
Thanks.
Interesting that Wiener would support this, given that San Francisco's recovery is highly dependent on AI companies locating in the city.