All My Coverage of California AI Safety Bill SB 1047 (in One Place)
Governor Gavin Newsom killed the bill, but the fight over it sketches a blueprint for the AI safety battles to come
On September 29, California Governor Gavin Newsom vetoed SB 1047, which would have been the most significant AI safety legislation in the US.
I spent nearly three months reporting full-time on the bill and wanted to consolidate all my coverage here, for the SB 1047 completionist or someone just learning about it now. Even though the bill is dead, I think the fight over it is a really instructive preview into the future of AI politics and maybe even politics more broadly.
If you’re going to read just one article, my Jacobin story is the most comprehensive and up-to-date.
I also wrote an op-ed for The New York Times about the urgent need for a new law to protect AI whistleblowers, but this only obliquely touches on SB 1047.
Here’s how the Jacobin piece opens:
The whole story of SB 1047 is long and complicated, but the gist of it is actually quite simple. By and large, the artificial intelligence industry does not want to be regulated. It especially doesn’t want to be liable for harms caused by its AI models. Since SB 1047 regulates the industry and uses liability to enforce those regulations, much of the industry doesn’t want the bill.
Industry insiders can’t say this explicitly, so they make other arguments instead (often arguing against versions of the bill that don’t exist). It’s not super surprising that these arguments don’t really hold up to scrutiny.
And here’s what the bill would have done:
SB 1047 mainly mandated that the largest AI developers implement safeguards to mitigate catastrophic risks. If a covered company’s AI model caused a disaster, defined as “mass casualties” or $500 million or more in damage, and the company’s safeguards were not in line with industry best practices and/or relevant government standards, then the company could be liable for damages and additional financial penalties. The bill also included protections for AI whistleblowers.
It would have been the first law in the United States to mandate that these companies implement safeguards to mitigate catastrophic risks, breaking from the tradition of relying on the voluntary AI safety commitments preferred by the industry and national lawmakers.
There’s plenty more in the full article!
All my SB 1047 stories in chronological order
The bolded articles are the most essential to understanding the overall story, in my opinion.
8/15/24 - The Nation - California’s AI Safety Bill Is a Mask-Off Moment for the Industry
8/28/24 - The American Prospect - Tech Industry Uses Push Poll to Stop California AI Bill
9/5/24 - TIME - Scott Wiener profile for TIME AI 100 2024 special issue
9/9/24 - SF Standard - Dozens of AI workers turn against bosses, sign letter in support of Wiener AI bill
9/12/24 - The Verge - SAG-AFTRA and National Organization of Women urge Gavin Newsom to sign controversial AI safety bill
9/17/24 - Transformer (co-authored with Shakeel Hashim) - Lies and deception: Andreessen Horowitz’s SB 1047 campaign is as misleading as it gets
9/19/24 - The Thomson Reuters Foundation - Battle rages over US's first binding AI safety bill in California
9/25/24 - The Verge - Hollywood is coming out in force for California’s AI safety bill
9/29/24 - New York Times Opinion - Laws Need to Catch Up to Artificial Intelligence’s Unique Risks
9/30/24 - Jacobin - With Newsom’s Veto, Big Tech Beats Democracy
10/16/24 - The Guardian US Opinion - Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?
In-depth threads
Given how fast everything was moving, I ended up doing a lot of analysis and writing on X (the everything app). Here are the greatest hits:
I evaluated all the claims made in the congressional letter against the bill.
Analysis + context on a letter from OpenAI whistleblowers.
A close read + analysis of Anthropic’s letter on the bill.
Fei-Fei Li’s relationship with Marc Andreessen and Andreessen Horowitz and how it might be affecting her commentary. (The same day Newsom vetoed SB 1047, he announced a new board of advisers on AI governance for the state. Li is the first name mentioned.)
Follow-on reporting and citations of my work
Probably the most fun and rewarding bit of this period was when deep learning “godfather” Geoffrey Hinton (now a Nobel laureate) quote-tweeted my SF Standard story on the AI employees who signed an open letter in favor of SB 1047, prompting a fight with Meta AI chief and fellow godfather, Yann LeCun. The spat was written up in Venture Beat. It was also not the first time my reporting started trouble in godfather paradise: the third godfather, Yoshua Bengio, shared my Jacobin cover story on AI existential risk on Facebook, eliciting an angry comment from, you guessed it, Yann LeCun.
LeCun doesn’t seem to hold a grudge, because he happily quote-tweeted a different thread I made about the recent turmoil at OpenAI (based on excellent reporting in the WSJ).
I was also pleased to see my SB 1047 reporting and analysis referenced in The New York Times, The Atlantic, Axios, The Brookings Institution, The LA Times, Fortune, and Vox.
Some other interesting links
The New Yorker - Silicon Valley, the New Lobbying Monster, by Charles Duhigg. A really fascinating, long-form profile of Chris Lehane, who seems to be a master of the dark arts of creative political lobbying. The focus is on crypto, and I think Duhigg underrates the significance of AI, but the piece is still worth a read. In August, OpenAI announced that Lehane is its new VP of Global Affairs. Lehane reportedly supported Sam Altman’s op-ed in The Washington Post: “Who will control the future of AI? A democratic vision for artificial intelligence must prevail over an authoritarian one.” I’ll have more to say on this idea in the future.
The Financial Times - AI may regret aping Wall Street’s regulatory resistance, by John Foley in the paper’s Lex column. The piece is about the SB 1047 veto and opens with this: “There are two ways to impose rules on an industry: before it breaks something, or after. Artificial intelligence is going down the latter route.” This may come back to bite them, Foley argues. The FT is maybe the closest thing to the voice of the long-term interests of the business world, so it was really interesting to see Foley co-sign most of the main arguments made by bill supporters.
Greece is cool
I just returned from a two-week trip to Greece for my cousin’s wedding. (Congrats to Stephen and Ellie!) There, I was able to slowly learn how to think and talk about something other than SB 1047.
Greece is an incredible country that I knew far too little about. For example, did you know that after the Germans were pushed out of the country in 1944, the UK put Nazi collaborators in positions of power to fight Greek communists and restore the Greek monarchy? (See this for more context on the UK’s role in the initial violence.) The country’s politics have been deeply affected by the resulting civil war and the complete lack of accountability for the Nazi collaborators, who were instead empowered to torture and kill Greek communists (who were themselves the core of the resistance to the Nazis). The model of aligning with any anti-communist forces to contain the ideology and its followers set the stage for the rest of the Cold War.
Or that there is an anarchist neighborhood in Athens that housed refugees in abandoned houses, until the new right-wing government raided the squats and forced refugees into camps?
Or that classical Greek sculptures weren’t actually white, but instead brightly painted?
You probably already knew that it’s really beautiful (can confirm). Anyway, here’s a nice picture in case you didn’t believe me: