A Compilation of Tech Executives' Statements on AI Existential Risk
Documenting industry leaders’ thoughts on whether AI might kill everyone
I wanted to collect the perspectives of tech executives on AI existential risk. This grew out of my recent post “Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy?”
Right now, the focus is on the statements of leaders of Big Tech companies, whose perspectives have been underreported, in my view. In the future, I might add more from prominent AI investors, like Marc Andreessen and Peter Thiel, as well as from the leaders of the top-three AI companies (OpenAI, Anthropic, and Google DeepMind), though their views have been more extensively covered.
If I missed anything, let me know via Substack or email (tgarrisonlovely [at] gmail [dot] com).
Microsoft
Microsoft CEO Satya Nadella spoke to WIRED in June 2023. The interviewer asked: “OpenAI CEO Sam Altman believes that [superintelligence] will indeed happen. Do you agree with him that we're going to hit that AGI superintelligence benchmark?”
Nadella replied:
I'm much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn't touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I'm not at all worried about AGI showing up, or showing up fast. Great, right? That means 8 billion people have abundance. That's a fantastic world to live in.
From a February 2023 CBS Mornings interview (h/t to Metastable for the tip and transcript):
Interviewer: And then I have to ask and I sound a little bit silly. I feel a little bit silly even contemplating it, but some very smart people ranging from Stephen Hawkins [sic!] to Elon Musk to Sam Altman, who I just saw in the hallway here, your partner at OpenAI, have raised the specter of AI somehow going wrong in a way that is lights out for humanity. You're nodding your head. You've heard this too.
Nadella: Yeah.
Interviewer: Is that a real concern? And if it is, what are we doing?
Nadella: Look, I mean, runaway AI, if it happens, it's a real problem. And so the way to sort of deal with that is to make sure it never runs away. And so that's why I look at it and say let's start with-- before we even talk about alignment and safety and all of these things that one should do with AI, let's talk about the context in which AI is used. I think about the first set of categories in which we should use these powerful models are where humans unambiguously, unquestionably are in charge. And so as long as we sort of start there, characterize these models, make these models more safe and, over time, much more explainable, then we can think about other forms of usage, but let's not have it run away.
Meta
In a 2016 interview, Mark Zuckerberg was asked if fear of AI takeover was “valid” or “hysterical.” His reply: “more hysterical.” He also said, “I think that the default is that all the machines that we build serve humans so unless we really mess something up I think it should stay that way.” Zuckerberg went on to say:
I think that along the way, we will also figure out how to make it safe. The dialogue today kind of reminds me of someone in the 1800s sitting around and saying: one day we might have planes and they may crash. Nonetheless, people developed planes first and then took care of flight safety. If people were focused on safety first, no one would ever have built a plane.
This fearful thinking might be standing in the way of real progress. Because if you recognize that self-driving cars are going to prevent car accidents, AI will be responsible for reducing one of the leading causes of death in the world. Similarly, AI systems will enable doctors to diagnose diseases and treat people better, so blocking that progress is probably one of the worst things you can do for making the world better.
In an April 2024 interview, Zuckerberg said:
In terms of all of the concerns around the more existential risks, I don't think that anything at the level of what we or others in the field are working on in the next year is really in the ballpark of those types of risks.
Relevant passages from Cade Metz’s Genius Makers:
It was the first time the two had met. Zuckerberg invited Musk to his white clapboard home under its leafy canopy in Palo Alto, hoping to convince the South African entrepreneur that all this talk about the dangers of superintelligence didn’t make much sense. He had balked when the founders of DeepMind insisted they wouldn’t sell their lab without a guarantee that an independent ethics board would oversee their AGI, and now, as Musk amplified this message across television and social media, he didn’t want lawmakers and policy makers getting the impression that companies like Facebook would do the world harm with their sudden push into artificial intelligence. To help make his case, he also invited Yann LeCun, Mike Schroepfer, and Rob Fergus, the NYU professor who worked alongside LeCun at the new Facebook lab. The Facebookers spent the meal trying to explain that Musk’s views on AI had been warped by a few misguided voices that were very much in the minority. The philosophical musings of Nick Bostrom, Zuckerberg and his fellow Facebookers said, were in no way related to what Musk had seen at DeepMind or inside any other AI lab. A neural network was still a long way from superintelligence. DeepMind built systems that optimized point totals inside games like Pong or Space Invaders, but they were useless elsewhere. You could shut the game down just as easily as you could a car. But Musk was unmoved. The trouble, he said, was that AI was improving so quickly. The risk was that these technologies would cross the threshold from innocuous to dangerous before anyone realized what was happening. He laid down all the same arguments he made in his tweets and TV spots and public appearances, and as he talked, no one could quite tell if this was really what he believed or if he was just posturing, with an eye toward some other endgame. “I genuinely believe this is dangerous,” he said.
Amazon
In 2018, Jeff Bezos said, “The idea that there is going to be a general AI overlord that subjugates us or kills us all, I think, is not something to worry about. I think that is overhyped.” In December 2023, Bezos appeared on Lex Fridman’s podcast and had this to say about AI:
I’m very optimistic about this. So even in the face of all this uncertainty, my own view is that these powerful tools are much more likely to help us and save us even than they are to on balance hurt us and destroy us.
Nvidia
Relevant passages from Stephen Witt’s November 2023 profile of Jensen Huang in The New Yorker:
The evening before our breakfast, I’d watched a video in which a robot, running this new kind of software, stared at its hands in seeming recognition, then sorted a collection of colored blocks. The video had given me chills; the obsolescence of my species seemed near. Huang, rolling a pancake around a sausage with his fingers, dismissed my concerns. “I know how it works, so there’s nothing there,” he said. “It’s no different than how microwaves work.” I pressed Huang—an autonomous robot surely presents risks that a microwave oven does not. He responded that he has never worried about the technology, not once. “All it’s doing is processing data,” he said. “There are so many other things to worry about.”
Following the interview, Huang took questions from the audience, including one about the potential risks of A.I. “There’s the doomsday A.I.s—the A.I. that somehow jumped out of the computer and consumes tons and tons of information and learns all by itself, reshaping its attitude and sensibility, and starts making decisions on its own, including pressing buttons of all kinds,” Huang said, pantomiming pressing the buttons in the air. The room grew very quiet. “No A.I. should be able to learn without a human in the loop,” he said. One architect asked when A.I. might start to figure things out on its own. “Reasoning capability is two to three years out,” Huang said. A low murmur went through the crowd.
Nvidia executives were building the Manhattan Project of computer science, but when I questioned them about the wisdom of creating superhuman intelligence they looked at me as if I were questioning the utility of the washing machine. I had wondered aloud if an A.I. might someday kill someone. “Eh, electricity kills people every year,” Catanzaro said. I wondered if it might eliminate art. “It will make art better!” Diercks said. “It will make you much better at your job.” I wondered if someday soon an A.I. might become self-aware. “In order for you to be a creature, you have to be conscious. You have to have some knowledge of self, right?” Huang said. “I don’t know where that could happen.”
Google
In an interview largely buried in the paywalled “Overtime” section of an April 2023 episode of 60 Minutes, CBS News’ Scott Pelley speaks with Google CEO Sundar Pichai, who says:
I’ve always thought of AI as the most profound technology humanity is working on. More profound than fire or electricity or anything that we’ve done in the past… We are developing technology which, for sure, one day, will be far more capable than anything we’ve ever seen before.
Later on, there’s this notable exchange:
Pelley: What are the downsides?
Pichai: I mean the downside is, at some point, that humanity loses control of the technology it’s developing
Pelley (voice over): Control, when it comes to disinformation and generating fake images.
At a Wall Street Journal event in May 2023, former Google CEO Eric Schmidt publicly expressed concern that AI is an “existential risk,” which he defined as “many, many, many, many people harmed or killed.” However, he has emphasized “misuse” risk, i.e. a bad actor using AI to cause harm, rather than “misalignment” risk, i.e. humanity losing control of a powerful AI.
In a November forum at the Harvard Institute of Politics, Schmidt reportedly said that the US is falling behind China in the AI race — an abrupt departure from his May assessment to Bloomberg that the US is “way ahead of China” on AI (“two or three years” to be precise).
By automating AI research, AGI could give a first-mover “a very, very profound advantage,” Schmidt said at Harvard. “It does matter who’s first,” he said. “Even small differences, like a few months, can get amplified.”
Schmidt also said he has “historically been a techno-optimist” about AI development and emphasized AI’s positive potential, according to a Harvard Crimson story on the forum. He has discussed the need for regulation, saying at the WSJ event, “If what you’re doing can kill a million people, right, it should be regulated.” At the Harvard event, Schmidt said there is room for agreement between the US and China on AI: “The most obvious restriction that should be agreed to is the use of automatic weapons systems,” he said. “There is a scenario where the AI system could decide on its own to launch a war, especially if it had direct access to weapons.”
Schmidt has extensive ties to national security and defense tech groups, having led the National Security Commission on Artificial Intelligence and the Pentagon’s Defense Innovation Board.
In a November opinion essay on AI in The Economist, Schmidt concluded with this:
In the decades ahead AI will address humanity’s greatest challenges and opportunities, perhaps even resetting a baseline of human wealth and well-being. Just that possibility itself demands that we pursue it.
Google co-founder Sergey Brin does not appear to have made any public statements on AI x-risk.
According to multiple independent sources, Google’s other co-founder, Larry Page, thinks AI could kill us all — he just doesn’t seem to care. In private settings, he’s reportedly dismissed efforts to prevent AI-driven extinction as “speciesist” and “sentimental nonsense,” viewing superintelligent AI as “just the next step in evolution.”
Anthropic
In the summer of 2023, The New York Times called Anthropic “the White-Hot Center of AI Doomerism,” but even its CEO, Dario Amodei, has presented a sunnier perspective as his company has grown.
In October 2023, Amodei was asked about his percentage chance of doom (AKA p(doom)) on a podcast. Here’s his reply:
I think I’ve often said that my chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10 to 25%, when you put together the risk of something going wrong with the model itself, with something going wrong with human, people, or organizations or nation-states misusing the models, or it kind of inducing conflict among them, or just some way in which society can’t handle it.
But here’s an exchange from November 2024 (h/t to Joshua Turner for the find and Liron Shapira for the clip):
Interviewer: What’s your p(doom) these days?
Amodei: I don't really like--
Interviewer: You've learned your lesson on this one.
Amodei: I never liked those words. I think they're kinda weird. My view is just that like, look, we should measure for risks as they come up. And in the meantime, we should get all the economic benefits that we could get, and we should find ways to measure the risks that are effective, but are minimally disruptive to the amazing economic process that we see going on now, that the last thing we wanna do is slow down.
Additionally, in October 2024, Amodei published a 14,000-word essay called “Machines of Loving Grace” that outlined how incredible the world could be if things go well with AI, partly in a conscious attempt to respond to his reputation as a “doomer.”
Of everyone mentioned so far, excluding Musk, I think Amodei is the most genuinely worried about the risks from AI. So it’s notable that even he is changing his public emphasis.
OpenAI
Sam Altman has a long track record of frankly (sometimes glibly) discussing x-risk from AI. However, as OpenAI has gotten closer to profitability and Altman closer to power, the CEO has begun downplaying the idea of AI-driven extinction.
Here’s me in Jacobin in January 2024:
One understandable source of suspicion is that Sam Altman is now one of the people most associated with the existential risk idea, but his company has done more than any other to advance the frontier of general-purpose AI.
Additionally, as OpenAI got closer to profitability and Altman got closer to power, the CEO changed his public tune. In a January 2023 Q and A, when asked about his worst-case scenario for AI, he replied, “Lights out for all of us.” But while answering a similar question under oath before senators in May, Altman doesn’t mention extinction. And, in perhaps his last interview before his firing, Altman said, “I actually don’t think we’re all going to go extinct. I think it’s going to be great. I think we’re heading towards the best world ever.”
Altman implored Congress in May to regulate the AI industry, but a November investigation found that OpenAI’s quasi-parent company Microsoft was influential in the ultimately unsuccessful lobbying to exclude “foundation models” like ChatGPT from regulation by the forthcoming EU AI Act. And Altman did plenty of his own lobbying in the EU, even threatening to pull out of the region if regulations became too onerous (threats he quickly walked back). Speaking on a CEO panel in San Francisco days before his ouster, Altman said that “current models are fine. We don’t need heavy regulation here. Probably not even for the next couple of generations.”
Altman’s comments were actually consistent with a little-discussed May 2023 blog post from Humanloop, a “developer platform for LLM applications,” which summarized a conversation with Altman: “While Sam is calling for regulation of future models, he didn’t think existing models were dangerous and thought it would be a big mistake to regulate or ban them.” The conversation may have been too candid for Altman’s preferences, as the post’s content was later replaced with this message: “This content has been removed at the request of OpenAI.” (Thankfully, not before archives saved it for posterity.)
In February 2015, nearly a full year before OpenAI was formally founded, Altman opened a personal blog post with this stark sentence: “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.”
In June 2015, as he was co-founding OpenAI, Altman said, “AI will probably, most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
Elon Musk’s since-dropped lawsuit against OpenAI alleged that Altman emailed Musk in 2015: “Obviously we’d comply with/aggressively support all regulation.”
In a September 2024 blog post called “The Intelligence Age,” Altman doesn’t mention extinction at all. The only risk he names is the impact on employment, which he downplays:
most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today).
In December 2024, Altman was interviewed at The New York Times DealBook summit:
Altman: My guess is that we will hit AGI sooner than most people in the world think and it will matter much less. And a lot of the safety concerns that we and others expressed actually don’t come at the AGI moment. It’s like, AGI can get built, the world goes on mostly the same way, the economy moves faster, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.
OpenAI has a strong incentive to declare AGI as early as possible, per this reporting from the Times in October 2024:
Oddly, that could be the key to getting out from under its contract with Microsoft. The contract contains a clause that says that if OpenAI builds artificial general intelligence, or A.G.I. — roughly speaking, a machine that matches the power of the human brain — Microsoft loses access to OpenAI’s technologies.
The clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations. Under the terms of the contract, the OpenAI board could decide when A.G.I. has arrived.
Also in December 2024, Altman was interviewed on Fox News:
Interviewer: “A lot of people who don’t understand AI, and I would put myself in that category, have got a basic understanding, but they worry about AI becoming sentient, about it making autonomous decisions, about it telling humans you’re no longer in charge?”
Altman: “It doesn’t seem to me to be where things are heading…is it conscious or not will not be the right question, it will be how complex of a task can it do on its own?”
Interviewer: “What about when the tool gets smarter than we are? Or the tool decides to take over?”
Altman: “I think tools in many senses are already smarter than we are. I think that the internet is smarter than you or I, the internet knows a lot of things. In fact, society itself is vastly smarter and more capable than any one person. I think we’re already good at working with tools, institutions, structures, whatever you want to call it, that are vastly more capable than one person and as long as we have a reasonably level playing field where one person or one company has vastly more power than anybody else, I think we know how to deal with that.”