Potentially, the most important ten minutes of your life this year[1] will be the ones you spend writing a letter to Gavin Newsom telling him not to veto Senate Bill 1047. If you're already convinced of Senate Bill 1047's importance, click on this link for an explanation of how to send a letter. (You do NOT have to mail anything. You should send it in an email.)
The rest of this post is an attempt to convince you to send a letter, especially if you LIVE IN CALIFORNIA, or ARE NOT PARTICULARLY INTERESTED IN TECH, or BOTH. If you are in one of those groups, please read on!
Senate Bill 1047 is a California state bill, the first specifically regulating artificial intelligence. If a company spends more than $100 million[2] training a cutting-edge AI model, they will have to create and follow a plan to make sure it's safe. If they don't, and their model causes $500 million in damages or a "mass casualty event", they will be held liable. It passed the legislature, but California governor Gavin Newsom can still veto it.
Look. I have a long, long history of being skeptical of claims about the imminent Singularity or robot apocalypse. You do not have to be worried that AI is going to turn us all into paperclips in five years to support Senate Bill 1047.
You know what else might cause a mass casualty event or $500 million in damages? Nuclear power plants. Vaccines. Fucking airplanes. No one is like “well, I don’t believe in the imminent airplanepocalypse, therefore we shouldn’t have any regulation of airplanes, and it would be fine if drunk people who had never been in an airplane before were routinely flying over major metropolitan areas in planes missing half their instruments and one of their engines.”
Generative AI is a powerful technology. It might just (“just”) automate writing routine emails, speed up programming, and of course make it easier to cheat on your homework. But serious people worry that it could automate a large number of remote-work jobs, ranging from contact-center work to data analysis. Any technology this powerful needs safeguards—before people die.
Senate Bill 1047 is absurdly light regulation. People don’t believe me when I say how light the regulation is. All it says is that you need to test your models to make sure that they don’t kill an enormous number of people, and if it looks like your model is going to kill an enormous number of people you need to take some steps to mitigate that, and if you don’t you will be held liable. It provides a flexible framework into which we can put more specific legislation as we understand AI better.
And like… if it turns out that "AI can kill people" is sci-fi nonsense, then what's the problem? It seems like proving that it won't kill people would be very easy?
SB-1047 is popular among current and former employees of leading AI companies, who are rightly concerned about potential damage from AI. It is also supported by a number of groups you don't normally associate with Singularitarian speculation. The National Organization for Women, for example, supports SB-1047, citing the likely disproportionate effect of AI-related disasters on marginalized groups such as women. So does the Screen Actors Guild, whose letter points out the damage already done to actors through pornographic deepfakes and unauthorized use of their likenesses. It correctly recognizes that "not causing hundreds of millions of dollars in damages" is the first, basic step on which we can build robust regulation of artificial intelligence. "AI safety" and "AI ethics" aren't opposed—they're the same damn thing.
And the opponents? Bad company to find yourself in! Meta, OpenAI, and Google, in the ancient tradition of large corporations everywhere, think their relentless pursuit of profit shouldn't be stopped by little details like "human wellbeing" and "lives." Much of the opposition is astroturf funded by the Trump-supporting venture capital firm Andreessen Horowitz, whose founders believe—this is not a strawman—that because technology is good, nearly all possible technology is good and it is anti-science to regulate it.
You don’t have to believe in anything strange or science-fictional to support SB-1047; you just have to believe that powerful technology should have oversight to prevent the worst outcomes. SB-1047 would still leave artificial intelligence less regulated than airplanes. But it’s an important first step. The time to regulate AI is before the first mass casualty event—just like the time to regulate carbon emissions was before the Earth warmed two degrees.
The Center for AI Safety Action Fund has asked:
The most useful thing you can do is write a custom letter. To do this:
1. Make a letter addressed to Governor Newsom using the template here.
2. Save the document as a PDF and email it to leg.unit@gov.ca.gov.
In writing this letter, we encourage you to keep it simple, short (0.5-2 pages), and intuitive. Complex, philosophical, or highly technical points are not necessary or useful in this context – instead, focus on how the risks are serious and how this bill would help keep the public safe…
Supporters from California are especially helpful, as are parents and people who don’t typically engage on tech issues.
Thank you for reading this.
[1] If you live in California, since your state is already going for Kamala Harris.
[2] More than was spent on training ChatGPT.
> Andreessen Horowitz, who believe—this is not a strawman—that because technology is good all possible technology is good and it is wrong to regulate it.
That is a strawman. I read the Techno-Optimist Manifesto and it doesn't say that all possible technology is good or that regulation is always wrong. For what it's worth, I had ChatGPT and Claude read the manifesto and asked whether that's a strawman: ChatGPT said it is a strawman, and Claude said it's "a significant oversimplification and mischaracterization of the essay's content, leaning towards being a strawman argument."
(I don't mean that LLM judgment is a clinching argument, more just noting that I did the basic diligence of checking whether my objection is easily shown to be mistaken before I clog up a comment section with it.)
I will not be writing a letter in support of this bill. The direct effects of SB 1047 are unlikely to cause any harm, so I have not written a letter in opposition to the bill either. But if it does, as this piece suggests, serve as a necessary first step towards the kind of regulatory regime that has killed thousands by slowing the distribution of vaccines, I will regret my decision not to try to help nip it in the bud. There should be no regulation of the specific content that publicly available generative AI is capable of producing. Pornographic deepfakes are morally disgusting, but preventing them is not worth giving up (through the banning of open source image models, for example) the ability of generative AI to serve the interests of individual humans without corporations and governments interfering.