My Disagreements With "Doing EA Better"
Guys, stop having new drama, I just came up with my opinions on the last one.
I disliked the post Doing EA Better an immense amount, given that I agree with about 30% of it and consider it to be directionally correct. Effective altruists should defer less to the movement’s leaders. The movement does rely too much on young, inexperienced, brilliant generalists, and should reach out more to subject-matter experts. Effective altruism needs to be more diverse both intellectually and in terms of marginalized-group membership. The Center for Effective Altruism should have less power and should be more accountable to the effective altruist community as a whole. The effective altruist movement should diversify funding away from Dustin Moskovitz.
Doing EA Better is quite long and covers an enormous amount of territory. In this post, I will outline a few of my objections, which will hopefully give a general sense of my objections to the overall post. The fact that I didn’t criticize something shouldn’t be taken as evidence that I agree with it; I’m just trying to keep the post from being too long.
Homogeneity of Effective Altruism
The paragraph that first made me snarl was:
The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male[9] in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies,1 drinks Huel, and consumes a narrow range of blogs, podcasts, and vegan ready-meals. He moves in particular ways, talks in particular ways, and thinks in particular ways. Let us name him “Sam”, if only because there’s a solid chance he already is.[10]
It is true that effective altruism is very homogeneous, and this is a problem. I am 100% behind inclusivity efforts.2 And I praise the authors for their observation that inclusivity goes beyond the standard race/class/gender to matters of culture and intellectual diversity.
However, I think that this subject should be addressed with care. When you’re talking about homogeneity, it’s important to acknowledge effective altruist members of various groups underrepresented in effective altruism. Very few things are more unwelcoming than “by the way, people like you don’t exist here.”
Further, the description itself is offensive in many ways. Describing the average member of a movement with as many Jews as effective altruism as “culturally Protestant” is quite anti-Semitic. The authors fail to mention queerness and transness, probably because it would be a bit inconvenient for their point to mention that an enormous number of EAs are bisexual and trans women are represented in EA at something like forty times the population rate. The average effective altruist is “neurodivergent,” which… is a bad thing, apparently? We need to go represent the neurotypical point of view, which is inescapable everywhere else in politics, corporations, and the media? The vague term “neurodivergence” actually understates the scale of effective altruism’s inclusion problem. Effective altruism is inclusive of a relatively narrow range of neurodivergences: it’s strikingly unwelcoming of, say, non-Aspie autistics.3 4
Finally, some of this homogeneity is about things that are… true? I realize it’s rude to say so, but consuming animal products in the vast majority of situations in fact supports an industry which tortures animals5 and God in fact doesn't exist.6 I am glad that the effective altruism movement has reached general consensus on these things! Effective Altruist Political Ideology7 is hardly correct in every detail, but I don't think it's a bad sign if a movement broadly agrees on a lot of political issues. Some political policies are harmful! Other policies make things better!
Further, perhaps I am interpreting the authors uncharitably, but I suspect that when they say “there should be more diversity of political opinions” they mean “there should be more leftists.” I am just ever-so-slightly suspicious that if my one-sided archnemesis Richard Hanania showed up with a post about how the top cause area is fighting wokeness, the authors would not be happy with this and in fact would probably start talking about racism. Which is fine! I too agree that fighting wokeness is not the top cause area! But in this case your criticism is not “effective altruism should be more inclusive of different political views,” it’s “effective altruism’s political views are wrong and they should have different, correct ones,” and it is dishonest to smuggle it in as an inclusivity thing.
The Animals And Global Poverty Cause Areas Exist
While Doing EA Better repeatedly claims to be talking about “the effective altruism movement,” it implicitly characterizes effective altruism as a small group of anti-existential-risk advocates and longtermist thinkers. Are you guys… maybe missing somebody? The majority of the money moved by the effective altruist movement, perhaps?
This is particularly outrageous because effective animal advocacy has a bunch of the heterogeneity they claim to care about. Effective animal advocates are far more likely to be female than longtermist or anti-existential-risk effective altruists are. The vast majority of effective animal advocates come from a background other than “upper-middle-class or upper-class Anglosphere person who did one of four majors at an Ivy or Oxbridge.” The effective animal advocacy movement has a long tradition of outreach to people in low- and middle-income countries. If you look at any grant list, at least a quarter of the grants are like “$100,000 for an animal advocacy movement in Cameroon.” And the investment is starting to pay off. Many effective animal advocacy charities, such as FIAPO and Dharma Voices for Animals, exclusively work and hire in low- and middle-income countries, especially in Asia. Other organizations, such as the Open Wing Alliance, are strongly international. And although in my opinion our outreach towards people of color in the Anglosphere has been less successful, it’s still a major area of interest. Instead of reinventing the wheel, maybe you can ask us what we’re doing right?
Further, Doing EA Better criticizes the effective altruist tendency towards excessive quantification of things you can’t put numbers on. In my opinion, the effective animal advocacy community has successfully resisted this temptation. Animal Charity Evaluators doesn’t provide a numerical cost-effectiveness estimate of its charities because their estimates are so uncertain that a numerical cost-effectiveness estimate is actively misleading. Papers about which animals are sentient generally either avoid including a subjective probability that a particular species is sentient or place it in an appendix. Again, you don’t have to forge your way into the uncharted wastes to figure out how to avoid inappropriate quantification. You can just ask!
A Lot Of Your Best Critiques Are Shallow Critiques
Doing EA Better distinguishes between “shallow critiques”—“technical adjustments to generally-accepted structures” written in EA jargon and which are “not critical of capitalism”8—and “deep critiques” which criticize EA orthodoxy, are critical of powerful EA figures, or which have different political assumptions than the Generic EA Political Ideology.9 Effective altruism responds well to shallow critiques and poorly to deep critiques.
But Doing EA Better spends an enormous amount of space on exactly the shallow critiques that they say that effective altruism responds well to! To be clear, a lot of their critiques are good. Effective altruist syllabuses should contain more thinkers outside of effective altruism? Absolutely, I agree entirely. Effective altruists dismiss climate change as “non-neglected” because the overall concept has a lot of money spent on it, even though many issues (like tail risks from climate) are in fact neglected? I’ve said it myself a thousand times. Effective altruists should focus more on increasing resiliency against existential risk rather than preventing individual existential risks? Seems plausible! Maybe we should donate more to ALLFED and focus on preventing great power war!
I am not sure why these points are in the post though, since they’re trying to focus on deep critiques. To be honest, I kind of feel like they’re borrowing plausibility from their correct shallow critiques for their deep critiques, which are mostly stupid.
Career Consequences For Criticism Are Hard To Avoid
Effective altruists try very hard to be accepting of criticism. Doing EA Better cites the (regrettable) experiences of Zoe Cremer and Luke Kemp, who feared career consequences from criticizing effective altruism. But in their own post, Doing EA Better links to three of the most prominent funders in effective altruism publicly committing to not discriminating against Cremer and Kemp for their criticism. The comment thread also includes Will MacAskill actively soliciting grant proposals for more work along those lines. Try to imagine that happening in any other group. If you tried to do work as critical as Cremer and Kemp’s in trans advocacy, you would not get people publicly committing to support you. You would get called a transmisogynist fascist who wants all trans people to die.
Similarly, I empathize with Cremer and Kemp about having lost friends and mentors. But the effective altruist community is ridiculous in terms of its members’ willingness to be friends with people they disagree with. I received mentorship from a negative utilitarian despite my strong opposition to negative utilitarianism. My coparent thinks that painless sudden human extinction is fine. One of my closest friends has sufficiently short AI timelines that he thinks all my work is approximately valueless. People who don't think animals matter still ask earnestly if I'm okay with them ordering meat. It even extends to non-effective-altruist issues. The reason that there are racists, transphobes, etc. in effective altruism, while these people are mostly excluded from other equally liberal movements, is that effective altruists don't want to stop being friends with other people over their beliefs. You can certainly make a case that effective altruism has gone too far in the direction of acceptance of disagreement and people should cut off their transphobic friends. But I think in order to have norms that would have kept Kemp and Cremer from facing some social consequences, effective altruists would have had to adopt norms like “you are no longer allowed to make individual decisions about whom you’re friends with at all.”
But however open effective altruists try to be to criticism, it is difficult to avoid there being any career consequences to criticism. Outside of contests like the Red Teaming Contest or the Change Our Mind Contest, granters tend to fund researchers who produce work they broadly agree with. The Shrimp Welfare Project isn’t going to fund work by a researcher who believes that helping shrimp is a waste of time. The Nuclear Threat Initiative isn’t going to fund work by a researcher who thinks that America should massively expand its nuclear arsenal. Redwood Research, Anthropic, and MIRI are all going to hire researchers that follow their organizations’ research agendas, and none of them are going to hire someone whose opinion on AI capabilities is “damn the torpedoes! Full speed ahead!”
Further, if I think someone’s work is good, I’m usually going to change my mind and agree with it. Not always! Sometimes we have deep worldview disagreements that lead us to different conclusions; sometimes I think someone is full of shit about one issue but that doesn’t affect their other work. But those are weird situations— the usual case is that, if I’m like “wow, those are amazing points,” I shift my beliefs in the direction of those points. If I’m a funder, I’m going to try to fund researchers I think are careful, insightful, informed, and nuanced—that is, I’m going to fund researchers who have traits that will tend to convince me of their viewpoints, which means that I agree with them.
Doing EA Better complains that effective altruists dismiss criticism by saying it is “poorly argued” and “likely net-negative” and that its author has “bad epistemics.” They claim it’s “EA code for ‘I disagree with this argument and I don’t like the author very much.’” However, consider the alternative possibility. If I disagree with an argument, then I very likely think that the implementation of the policies the argument suggests is net-negative. Further, I am very likely to be unconvinced by arguments which I feel are poorly argued by people with bad epistemics. And obviously if I think that your research is poorly argued, your proposed policies are likely net-negative, and you have bad epistemics, I’m not going to give you money. I am also less likely to want to be friends with you.
If you’re interested in saying things that piss off the people who give you money, then you should obtain some alternate source of money first. That isn’t the effective altruism movement being uniquely toxic. That is just how grants work and have always worked for every movement since the invention of grants. There are traditional solutions to the tendency of this reality to silence criticism that don’t rely on the people with money being hard to piss off, like tenure. I’d buy that we should look into them.10
Individually, however, the solution is simple: if you think effective altruism is fundamentally wrongheaded, don’t fucking rely on the effective altruist movement for money. Some options for having the free time to criticize effective altruism (paid subscriptions on Substack, academia, marrying rich) are quite competitive, but so are research jobs in effective altruism. And of course there’s always the option taken by people in social movements around the world: getting a day job and writing in your free time.
And, you know, all your friends shouldn’t be effective altruists anyway. Talk about an intellectual monoculture. Take up knitting, and if you get #cancelled by the effective altruist community, complain to your knitting friends.
Ideologies Are Fine Actually
The authors point out that, while effective altruism markets itself as “figuring out the best way to do good,” in fact:
This package, termed “EA orthodoxy”, includes effective altruism, longtermism, utilitarianism, Rationalist-derived epistemics, liberal-technocratic philanthropy, Whig historiography,11 the ITN framework, and the Techno-Utopian Approach to existential risk.
That is how literally every movement works. Feminism markets itself as “feminism is the radical notion that women are people,” but it actually has a bunch of common ideological beliefs: sexism is real; sexism harms women more than it harms men; gender roles are harmful; sex differences in mental traits are small and possibly nonexistent; it is important for women to be able to control their own bodies; women’s liberation is fundamentally linked to the liberation of other oppressed groups; and so on. You can disagree with a couple of them and still be a feminist, but if you disagree with a lot of them you should probably start your own movement. It is not in any way bad that you’re unlikely to be hired for a feminist organization if you think women are people but also believe that gender roles are great, sex differences in mental traits are huge, and reproductive rights shouldn’t be legally protected.
As it happens, I basically agree with most of the ideological claims of effective altruism and feminism, and so I’m an effective altruist and a feminist. If you don’t, then that’s fine, but you should go do your own thing instead of complaining that no one has succeeded in summing up the beliefs of their entire movement in a single sentence.
Overconfident Claims About What The Science Says
Doing EA Better claims that the science shows that workers’ cooperatives work so well that they’re far better choices for effective altruism than traditional nonprofit management strategies. Further, they claim that diversity is one of the most important predictors of whether a group comes to true beliefs, so much so that it’s more important than whether people actually have any domain expertise. They also claim that social and emotional intelligence is more important than whether people are good at their jobs.
Absolutely none of these claims are uncontested. Nonprofits are usually not run as workers’ cooperatives, because people believe a hierarchical structure works better. In fact, effective altruist organizations have a noticeably flat structure compared to other nonprofits, so if anything our knowledge of nonprofit management shows that our organizations should be more hierarchical. The evidence on whether diversity leads to better decision-making is mixed and rife with publication bias. And I challenge you to find me a study that shows that poorly qualified wambs make better decisions than a nerd with domain expertise.
Of course, a careful review of the evidence could convince me of any of those claims. And the post is long enough without such a lit review. But you can say “we think that the evidence is suggestive, and effective altruists should investigate it more and see if they should change their behavior.” The authors’ level of confidence is totally out of line with the evidence they provide, which makes me suspicious of the other arguments in their post.
I Am Not Really Clear On How You Think The Effective Altruism Movement Is Supposed To Work
Okay, so, we get more domain experts who don’t agree with the effective altruist worldview or values. And we run our organization as a workers’ cooperative where every person gets an equal say. You… do realize that this is rapidly going to stop being an effective altruist organization, right? It’s going to turn into another generic charity. Of course, if you think the effective altruist movement doesn’t outperform generic charities, that’s fine! But I am baffled as to why you are posting on the Effective Altruism Forum instead of helping at a generic charity. There is no generic charity shortage. I am sure some of them are even workers’ cooperatives.
As regards democratic allocation of Dustin Moskovitz’s money, I can’t put it better than Dustin Moskovitz himself:
If folks don't mind, a brief word from our sponsors...
I saw Cremer's post and seriously considered this proposal. Unfortunately I came to the conclusion that the parenthetical point about who comprises the "EA community" is, as far as I can tell, a complete non-starter.
My co-founder from Asana, Justin Rosenstein, left a few years ago to start oneproject.org, and that group came to believe sortition (lottery-based democracy) was the best form of governance. So I came to him with the question of how you might define the electorate in the case of a group like EA. He suggests it's effectively not possible to do well other than in the case of geographic fencing (i.e. where people have invested in living) or by alternatively using the entire world population.
I have not myself come up with a non-geographic strategy that doesn't seem highly vulnerable to corrupt intent or vote brigading. Given that the stakes are the ability to control large sums of money, having people stake some of their own (i.e. become "dues-paying" members of some kind) does not seem like a strong enough mitigation. For example, a hostile takeover almost happened to the Sierra Club in SF in 2015 (albeit for reasons I support!).

There is a serious, live question of what defines an EA right now. Are they longtermists? Do they include animals in the circle of moral concern? Shrimp? I'm not sure how you could establish a clear membership criteria without first answering these questions, and that feels backwards. I do think you could have separate pools of money based on separate worldviews, but you'd probably have to cut pretty narrowly which defeats the point.
Further, Doing EA Better proposes:
A certain proportion of EA funds should be allocated by lottery after a longlisting process to filter out the worst/bad-faith proposals.
…how is that not just giving up on the core effective altruist claim that some charities are hundreds of thousands of times better than others and it is possible to find them?
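To put toy numbers on that objection: everything below (the hundred-proposal longlist, the log-normal spread, and the simulate_allocation helper) is my own illustrative assumption, not anything from Doing EA Better or from real grant data, but it captures the basic arithmetic. If cost-effectiveness really does vary by orders of magnitude across a longlist, a lottery buys you roughly the average proposal, while evaluating and funding the best one buys you the tail.

import random
import statistics

random.seed(0)

def simulate_allocation(n_proposals=100, trials=10_000):
    """Toy comparison of funding by lottery vs. funding the proposal judged best.

    Cost-effectiveness scores are drawn from a log-normal distribution purely
    for illustration; the real distribution is unknown and contested.
    """
    lottery, selection = [], []
    for _ in range(trials):
        scores = [random.lognormvariate(0, 2) for _ in range(n_proposals)]
        lottery.append(random.choice(scores))  # fund one longlisted proposal at random
        selection.append(max(scores))          # fund the proposal with the top score
    return statistics.mean(lottery), statistics.mean(selection)

lottery_ev, selection_ev = simulate_allocation()
print(f"lottery: {lottery_ev:.1f}   selection: {selection_ev:.1f}")
# Under these made-up numbers, selection beats the lottery by a large multiple,
# which is exactly the premise a lottery gives up on.

The specific multiple doesn’t matter, since it depends entirely on the made-up distribution; the point is that a lottery only makes sense if you think longlisted proposals are roughly interchangeable, which is the claim effective altruism rejects.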
The thing that the authors of Doing EA Better want to do is very interesting. (Someone in the comments of the post called it “democratic altruism,” which I think is a good name.) I fully support them in starting a different movement that does democratic altruism. If democratic altruism turns out to be a better way of improving the world than effective altruism, I will switch. But if you disagree with many core effective altruist claims like “some charities are hundreds of thousands of times better than others”, and with many core aspects of the worldview like consequentialism, longtermism, rationalist-derived epistemics, and the importance/neglectedness/tractability framework… effective altruism is probably not the right movement for you.
What I want to know is where the authors found a movement full of people who don’t wear movement-branded shirts.
This is kind of my brand.
Oddly, in my experience, the effective altruism community is a really great place to have a mental illness socially constructed as a severe mental illness. As someone who is openly severely mentally ill, I’ve found that EAs as a whole trust me as an authority on my own condition, don’t dismiss my work because of my mental illness, are eager to accommodate my needs, and are genuinely both sympathetic and outraged about the structural ways that people like me are mistreated.
Several of the authors’ suggestions would make effective altruism less accessible for neurodivergent people. “Your emotional and social intelligence is more important than your ability to do tasks,” for example, explicitly devalues the contributions of many autistics, psychotic people, traumatized people, and people with personality disorders. Further, “people should read books and not blog posts” excludes many autistics, people with ADHD, people with literacy issues, and other people who struggle reading long texts. Best practices for disability inclusion are that information should be available in multiple formats (audio, video, books, articles, infographics…) so that everyone can have access to information. The effective altruism community does an excellent job with this aspect of disability inclusion (although as always it’s lacking in many other areas).
Unless the complaint is that we’ve all standardized on a particular vegan ready-meal and are not eating other, equally good vegan ready-meals. If so, I would appreciate brand recommendations. I for one will make the sacrifice for inclusion here.
And space in fact is really cool.
Which is not centrist and is actually quite radical. I wish the centrist opinion was unsure whether America should have full open borders or merely increase immigration by four hundred percent.
Eyeroll.
But, like, leftist ones and not Fighting Wokeness As Cause X.
Although god the competition for tenure-track positions at effective altruist organizations would make academia look like Stardew Valley.
I recommend reading this excellent post about the definition of Whig history. “The world is awful, the world is much better, the world can be much better” is a central effective altruist belief but it is not the same thing as Whig history (which I’m opposed to). Effective altruists are not as careful as they should be to avoid Whig history but I don’t think the temptation is central to the ideology.
"Finally, some of this homogeneity is about things that are… true?"
Yeah, I was a bit surprised that this wasn't paragraph one of your response. I read that list going, "oh shit, there aren't enough pro-rapists or believers in the Four Humours theory of disease!"
Being at least consequentialist-adjacent in ethics is pretty much the central concept of the movement. I guess there might be a person in the Bruce Springsteen Fan Club who doesn't enjoy listening to music, but I wouldn't be shocked that there weren't many.
And thinking space is cool is like thinking puppies are cute or thinking chocolate tastes good. How many people does it filter out?
FWIW, I'm not connected to the organized movement except tangentially online, live far away, and 100% of my donations are to vegan outreach in countries where money goes substantially farther than it does for me.