The news startup Semafor, which often does excellent reporting, published an article yesterday on the relationship between effective altruism and OpenAI’s bungled firing of CEO Sam Altman.
The journalists argue that EA’s inexperience and insularity played a role in the firing:
Among the issues that the OpenAI saga revealed is EA’s strange detachment from practical expertise. As this week painfully showed, its acolytes seem to be more effective on message boards than in boardrooms.
Core to the article’s thesis is the OpenAI nonprofit board’s relationship to EA:
Three of the six seats on OpenAI’s board are occupied by people with deep ties to effective altruism: think tank researcher Helen Toner, Quora CEO Adam D’Angelo, and RAND scientist Tasha McCauley. A fourth member, OpenAI co-founder and chief scientist Ilya Sutskever, also holds views on AI that are generally sympathetic to EA.
But is this true?
Toner and McCauley each have extensive connections to EA, as Semafor highlights in the article. But the only evidence Semafor cites for Adam D’Angelo’s “deep ties” to EA is that he worked with Dustin Moskovitz at Facebook, sits on the board of Moskovitz’s company Asana, and “has repeatedly echoed similar concerns about AI espoused by EA” (with no links to support this).
Moskovitz funds Open Philanthropy (OP), an EA-aligned foundation and the biggest AI safety funder. D’Angelo started a multibillion-dollar Silicon Valley tech company, so it’s hard to say that he’s inexperienced and isolated. But none of the long-time EAs I asked think that D’Angelo is an EA. His name has never even been mentioned on the EA Forum, whereas Helen Toner’s and Tasha McCauley’s names show up 87 and 24 times, respectively.
Semafor also writes that OpenAI co-founder and chief scientist Ilya Sutskever “holds views on AI that are generally sympathetic to EA.” So does anyone who thinks that AI could pose an extinction risk hold views that are “generally sympathetic to EA”? Alan Turing worried about superintelligent AI, so was he “generally sympathetic to EA”? Stephen Hawking?
However, given that Toner and McCauley’s EA ties actually are deep, Semafor is correct to contest the bullshit claim made by OpenAI prior to the firing that “none of their board members are effective altruists.”
Before he was arrested last year, Bankman-Fried surrounded himself with fellow effective altruists as well, failing repeatedly to follow widespread, established norms about evaluating the risks of volatile investments.
To support its thesis, Semafor also brings up Sam Bankman-Fried’s (SBF) significant role in EA, but doesn’t disclose his significant role in Semafor: SBF was once the startup’s biggest outside investor, having put in $10M of its $25M pre-launch raise. It did, however, make sure to disclose to readers that a prediction market used by EAs was “backed by Bankman-Fried and other EA donors.”
Does EA have a “strange detachment from practical expertise”?
OpenAI’s nonprofit board, per its unusual charter, has a fiduciary duty to “humanity” rather than to investors or employees. Whether or not the decision to fire Altman was in the best interest of humanity, its execution was botched. Is EA to blame?
Half of the people who voted to fire Altman hold C-suite positions at multibillion-dollar companies they co-founded, and their ties to EA, as we’ve seen, are actually quite shallow.
Is the idea that the two EAs on the board used their insularity and inexperience to somehow infect the others, turning them into incompetent recluses?
Primarily blaming EA seems like a stretch.
With all that said, I have long felt that EA would benefit from greater respect for experience and engagement with the wider world.
EA is a young movement, both in the age of its members and of its organizations. It’s brought some welcome, fresh thinking to philanthropy. The community’s willingness to question prevailing wisdom has been a source of strength, but it has led to some pathologies as well. Once the powers that be trust you, having been convinced of your virtue, smarts, and “epistemics” (i.e. your ability to evaluate evidence), you’ll be granted many opportunities, whether or not you have the experience to handle them.
One of the common critiques of EA, particularly from the left, is that it focuses too much on individual contributions. EA celebrates figures like Stanislav Petrov, who likely averted global nuclear war by overriding a Soviet false alarm, and Norman Borlaug, who spearheaded the Green Revolution that fed billions.
There is inspirational power in these stories, but I think the left critique has merit. In my view, the best way to reliably and sustainably direct people’s talent and virtue to improve the world is through building robust institutions that coordinate collective action.
FTX had essentially no institutional safeguards and a hero-worshiping culture around SBF, and we saw how that turned out.
If you put too many of your eggs in the “heroic individuals” basket, you can end up lionizing a con artist like SBF. If your strategy involves giving your heroes unfettered and unaccountable power, you risk empowering and vouching for a monster—unless you can perfectly judge character. If instead you invest in institutions that don’t rely on any one individual to succeed, you can afford to be wrong about some number of people without tanking your entire movement.
But institutional design isn’t easy. And any ambitious strategy to improve the world also needs a materialist analysis, a clear-eyed account of where money and power actually sit, something else EA has historically lacked.
OpenAI’s humanity-first nonprofit structure was not robust to a charismatic founder backed by Microsoft, a nearly $3 trillion company. Nor did it help the board that the firing derailed a planned sale of employee-owned stock at an $86 billion company valuation.
In an incident first reported by Semafor, EA-aligned early employees at Alameda Research tried to force SBF out of the company in April 2018 over his dishonesty and wanton risk-taking. Lacking the power to oust him, they instead resigned en masse. At the time, SBF was a trustee of the US branch of the Centre for Effective Altruism (CEA). The resigning employees warned CEA that SBF was not to be trusted, but he remained on the board until 2019.
Ironically, EA is now taking heat for an almost inverted situation. Unlike SBF, Altman was fettered by and accountable to OpenAI’s nonprofit board, at least in theory. While we still don’t really know why, a majority of the board, including Sutskever, Altman’s colleague of eight years, lost confidence in him and tried to disempower him. With Altman back on top at OpenAI and the EAs jettisoned from the board, that effort has largely failed.
This wasn’t the first time Altman had been fired, either. In 2019, Y Combinator founder Paul Graham removed him from the incubator’s helm over concerns that Altman was prioritizing his own interests over the organization’s, concerns that echo those raised by OpenAI’s board last week.
When it comes to AI, the stakes might extend far beyond any one company or industry, something Altman has been quick to recognize. In May, he and hundreds of AI researchers and prominent figures signed a letter stating:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
For all of our sakes, let’s hope Altman’s risk management is better than SBF’s.