22 Comments
Oct 24 · Liked by Garrison Lovely

What I see here is similar to what my co-author and I discovered when we wrote our well-received account of the Deepwater Horizon disaster. In the late 1990s the head of BP, to the applause of McKinsey and Stanford Business School (two institutions that should never be let near a high-consequence engineering project), made BP the most efficient producer of oil in the Gulf of Mexico. He did this by stripping all of the redundancy out of the organization. All that was left were forward-looking, "get 'er done, son" elements, and there was nobody left who could say "no" or even alert upper management to the potential for disaster. And disasters followed: two million barrels of oil spilled on the Alaska tundra, 15 dead in the Texas City refinery explosion, and 11 dead and the largest man-made ecological disaster in the history of the U.S. when the Macondo well blew out.

I see OpenAI reconfiguring itself in the same way, applauded by the same finance-first culture for the same reasons, and running the risk of the same kind of multiple catastrophe. Fasten your seat belts, folks, it's going to be a bumpy ride.

author

Wow. Consistent with my direct and indirect experience with (not in) the outfit. Hope you're doing OK now.


Yes, but there is a difference. BP was highly profitable to begin with, and OpenAI is hemorrhaging money. One acted from greed, the other arguably from a mix of fear and hope. This is not to say that either made good decisions, but it helps us understand the situation better.


Good clarification, thanks. I was focused on the evolution, not the motive.


Yes, it is a good analogy.


“OpenAI, as we knew it, is dead.”

I'll suggest instead that OpenAI, as we thought we knew it, was never alive. It was a dream. The dream, shared by employees, investors, and the public, was that software more capable than existing search engines, conversational agents, and generative tools would quickly find revenue streams sufficient to support an ongoing focus on safe and secure development rather than on profitability. Those revenue streams have not materialized, the dream dematerialized, and here we are, with no need to assume bad intentions anywhere.


There is, however, a market for OpenAI's tech, and it will likely grow as the tools get better. It just won't be quick.

Oct 24 · Liked by Garrison Lovely

Great piece!

"And we should think harder about how China is likely to react if the “free world” moves mountains to kneecap their AI progress."

But we already have...

author

Thank you! And good point, though I think there's still a world of difference between e.g. export controls and the kind of thing Dario proposes: https://x.com/HaydnBelfield/status/1845197740025970875


In a world where a large percentage of AI-related research papers are published by Chinese citizens or Chinese emigrants, it is odd to maintain this policy of coercion towards China.

Besides, such restrictions achieve only one thing: they accelerate China's investments in the R&D that will ultimately make it unnecessary for them to import key technologies from outside China. After all, it is naive to assume that a country of 1.5 billion souls cannot achieve whatever its leaders decree a priority.


Perhaps OpenAI is less concerned with AGI safety nowadays because they are starting to realise that AGI is still far away. It may be like worrying about the safety of nuclear fusion power plants.


The world isn't ready for AGI? That ain't a bad thing. Why waste time and resources getting prepared for something that never arrives?

Of more pressing interest is whether the world is ready for high-quality, high-efficiency fraud and spam and garbage-generation tools that bombard us with fake versions of what until recently could only be created by people. Cos that's what's being delivered.

Brundage sounds like a well meaning person dedicated to his cause. But he's not the kind of person that policymakers and journalists and the general public should be listening to right now.


It is the end of the era of smoke and mirrors, and the beginning of the era of realism.

AI is business automation. Tools incrementally smarter than what came before. AGI is not nigh. Doom is not nigh. Regulation is premature too.

Waymo's cars taught us two lessons. Short-term, the hype people are wrong. Long-term, the skeptics are wrong.

Next year will be about agents. Those will take time to get right.

In a few years, the machinery will be much better, and both regulation and concerns about doom and unemployment will have more substance.

Oct 24 · edited Oct 24

Brundage is the real deal. Luckily, it's extremely unlikely IMO that any system that relies on an LLM as the basis for its cognition will achieve human-level AGI. Therefore we still have some breathing space (at least a couple of decades) in which to resolve the tension between profit and alignment.


LLMs will not be able to achieve feats of human cognition, since the most complex models we have in our heads can't be represented as a combination of language, tool use, symbolic reasoning, and physics-based simulations. We can think deeply, and I have no idea how to represent or replicate that.

LLM-based agents will go a long way in semi-mechanically performing loosely structured but rather predictable work though. That will only require incremental improvements to existing architectures.


Or: we do not need AGI/ASI alignment because we will not have AGI/ASI, just as we do not need time-travel police.


"I would like to see academics, companies, civil society, and policymakers work collaboratively to find a way to demonstrate that Western AI development is not seen as a threat to other countries’ safety or regime stability, so that we can work across borders to solve the very thorny safety and security challenges ahead." This approach has been proven wrong in the past 40 years and we should not be fooled again by this mindset.


So I passed off these screenshots to my super secret god-tier AI to summarize, and I got this strange result. Is there a hidden message in it?

"At first, I was like, hey, doesn't making this a for profit company kind of contradict the reason I came here, but then I came around to it, because like, if I said otherwise or contradicted anything OpenAI says, I'd stand to lose a lot of valuation. Besides, I think we have enough gaps and times that as long as I can make this grift my career, we'll be solve it somehow together." -- Extremely overpaid guy who self assigns responsibility over existential risk


On the up-side, they have all joined the "Nuclear Fusion Readiness Team" ;>


"He thinks the world is not ready for powerful AI." He may alternatively think the people at OpenAI are not the people needed to build AGI, having little or no experience in fields that require high reliability (the tool is not suited to high reliability, so it is not surprising that the OpenAI people have no experience). The premise that an understanding of the meaning of words , the logic that binds English together, and how to integrate images (photos and diagrams) and video - all of these are not necessary, dooms OpenAI to limited or toy applications and future irrelevance.
