7 Comments

Great piece, thank you :) I think some of the confusion and side-switching from tech CEOs on AI x-risk might also be explained by another factor: their different market strategies. From a great paper on open-source AI titled "Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI" (highly recommended in its entirety):

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807

From p. 17: "It’s worth noting here that we’ve described two instances of lobbying through two entities tightly tied to Microsoft, seemingly in different directions. GitHub’s argument, outlined in section one [that open-source AI is good and should not be regulated], is self interested because they (and Microsoft) rely on open source development, both as a business model for the GitHub platform and as a source of training data for profitable systems like CoPilot. This makes sense for them, while OpenAI is arguing primarily that models ‘above a certain threshold’ should not be open — a threshold that they effectively set due to resource monopolies Microsoft benefits from. So, open source exceptions are good for them. Arguing that open sourcing their powerful models is dangerous also benefits OpenAI — this claim both reasserts the power of their models and allows them to conflate resource concentration with cutting edge scientific development."

Could it be that we've seen similar dynamics of playing to both sides with AI x-risk? Even so, you're right that "If AI companies ever needed to rely on doomsday fears to lure investors and engineers, they definitely don’t anymore."


I think this is a great point. It may be that when you're small and trying to attract talent and attention, speaking about x-risk makes sense, but when you get big enough, more traditional corporate pressures begin to dominate. In other words, your strategy can change over time as your position changes.


Satya Nadella, in an interview from February 2023 (https://www.youtube.com/watch?v=YXxiCwFT9Ms, at 16:55):

Interviewer: And then I have to ask and I sound a little bit silly. I feel a little bit silly even contemplating it, but some very smart people ranging from Stephen Hawkins [sic!] to Elon Musk to Sam Altman, who I just saw in the hallway here, your partner at OpenAI, have raised the specter of AI somehow going wrong in a way that is lights out for humanity. You're nodding your head. You've heard this too.

Nadella: Yeah.

Interviewer: Is that a real concern? And if it is, what are we doing?

Nadella: Look, I mean, runaway AI, if it happens, it's a real problem. And so the way to sort of deal with that is to make sure it never runs away. And so that's why I look at it and say let's start with-- before we even talk about alignment and safety and all of these things that one should do with AI, let's talk about the context in which AI is used. I think about the first set of categories in which we should use these powerful models are where humans unambiguously, unquestionably are in charge. And so as long as we sort of start there, characterize these models, make these models more safe and, over time, much more explainable, then we can think about other forms of usage, but let's not have it run away.


I think this is a great essay. I have found it very confusing to know who is conflicted, whether there could be a conspiracy, etc. I think you should have mentioned Andrew Ng. I also think you could be clearer in parceling out the different conspiracy theories (overhyping for investment/overuse vs. regulatory capture). Another argument against the conspiracy theories is that many doomers create new labs, increasing competition. If it were a well-organised conspiracy to kill open source, then they should all stick together in the same lab lol. I guess maybe this is a weak argument. It could still be a conspiracy where Anthropic, DeepMind, and Musk simply couldn't get along. And OpenAI defected completely recently. Your essay seems to focus on big tech executives generally, but I thought the conspiracy theories are focused on the leadership of Anthropic and DeepMind. I don't know about xAI.


Thanks Dean! I may write a separate piece about what's going on with the top-three AI companies, because I agree that that's where a lot of the fuel for the hype narrative comes from (and where the case is strongest).

Ng is also a good flag. He has been one of the loudest promoters of the hype narrative, while also being a conflicted and, at best, inaccurate critic of AI safety policy: https://x.com/ketanr/status/1798794760087539841

I updated the compendium with the latest statements from Amodei and Altman, which further support my argument that people tone down the x-risk stuff as their companies get bigger: https://garrisonlovely.substack.com/i/152552592/anthropic


Thanks for the support! Folks can read about how we use AI/ML in our work at GiveDirectly.org/ai
