The most common way to identify the effective altruism movement is by pointing. Effective altruists are:
- The people who try to help the global poor in ways backed up by randomized controlled trials, generally favoring public health programs.
- The people who try to improve conditions for farmed animals through corporate outreach and policy advocacy, and who care about weird groups of animals like insects and wild animals.
- The people who spend a lot of time worrying that we're going to make artificial intelligences and they're going to kill everyone.
This raises many questions, like what the heck these three groups have to do with each other, and what they get out of posting on the same forums and going to the same conferences.
Perhaps we could seek out definitions that would help us. The Centre for Effective Altruism says effective altruism is "a framework and research field that encourages people to combine compassion and care with evidence and reason to find the most effective ways to help others." The essay "Effective Altruism is a question (not an ideology)"[1] says effective altruism is the attempt to answer the question "How can I do the most good, with the resources available to me?"
These definitions are unsatisfying, because everyone who is trying to be a good person supports those things. No one is against care and compassion. No one wants to do less good than they could for no reason. No one is like “I specifically hate evidence and reason and I’m going to make my charitable decisions using astrological charts.” Tautologically, people don’t use resources they don’t have.
Often, EAs respond to critique of effective altruism with what Michael Nielsen calls EA judo:
One of the most common lines of "attack" on EA is to disagree with common EA notions of what it means to do the most good. "Are you an EA?" "Oh, those are the people who think you need to give money for malaria bed nets [or AI safety, or de-worming, etc., etc.], but that's wrong because […]."…
These statements may or may not be true. Regardless, none of them is a fundamental critique of EA. Rather, they're examples of EA thinking: you're actually participating in the EA project when you make such comments. EAs argue vociferously all the time about what it means to do the most good. What unites them is that they agree they should "use evidence and reason to figure out how to do the most good"; if you disagree with prevailing EA notions of most good, and have evidence to contribute, then you're providing grist for the mill driving improvement in EA understanding of what is good…
Most external critics who think they're critiquing EA are critiquing a mirage. In this sense, EA has a huge surface area which can only be improved by critique, not weakened… A pleasant, informative example is EA Rob Wiblin interviewing Russ Roberts, who presents himself as disagreeing with EA. But through (most of) the interview, Roberts tacitly accepts the basic ideas of EA, while disagreeing with particular instantiations. And Wiblin practices EA judo, over and over, turning it into a very typical EA-type debate over how to do the most good. It's very interesting and both participants are very thoughtful, but it's not really a debate about the merits of EA.
I am an occasional practitioner of EA judo myself. But EA judo doesn't take opposition to effective altruism seriously on its own terms. From an EA judoka's perspective, everyone is a temporarily embarrassed effective altruist who simply needs to be brought into the fold.
My belief is that effective altruists do unusual things because they believe unusual things. Other people don't act like effective altruists because they disagree with effective altruists. These disagreements are about fundamental worldview matters, so they can be hard to talk about or even identify. But they are real.
Most definitions of effective altruism are unsatisfying because their goal is primarily persuasive: to get you to become an effective altruist, or to change the direction of the effective altruist movement. The former kind of definition tries to make effective altruism seem obvious and uncontroversial, smuggling the broader worldview in through the back door. The latter kind makes claims that the author would like to be true of effective altruism, but that often aren't.
I intend to write a post series that defines effective altruism in a way that’s anthropological. What do effective altruists believe that other people tend not to believe? Why do they believe that? What do the vegans and the kidney donors, the AI safety researchers and the randomized controlled trial lovers, have in common?
Here are the beliefs that I think are distinctive to effective altruists:
1. Welfarist, maximizing consequentialism: you should take the actions that cause people to be as well-off as possible.
2. Moral circle expansionism: the well-being of all beings capable of well-being matters more or less equally; in particular, you should disregard special relationships and moral desert.
3. Quantitative mindset: reasoning with numbers is a useful tool that sheds light on many problems.
4. Taking ideas seriously: if logic and evidence suggest that a particular conclusion is true, but it seems absurd, work under the assumption that it's true.
5. Rationalist epistemics: Bayesianism, cognitive biases, and the free marketplace of ideas.
6. Ambition: you should strive to achieve big things and have a large effect on the world.
7. Importance, tractability, neglectedness: the correct way to pick causes is based on how important they are, how easy it is to make progress on them, and how much effort is already being directed into them (a rough formalization follows this list).
8. The effective altruist narrative of history: the past was terrible; the future will be weird.
9. The effective altruist approach to politics: small-l liberal, capitalist, and technocratic.
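For point 7, a common formalization (this is roughly 80,000 Hours' framing of the framework, not anything original to this post) treats marginal impact as a product of three ratios whose units cancel:

$$
\frac{\text{good done}}{\text{extra dollar}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
$$

The appeal is that each factor can be estimated separately, which is how the framework lets effective altruists compare causes as different as malaria nets and AI safety on one scale.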
Over the next few weeks, I hope to publish a post that elaborates on each of these points.
[1] Which is a good essay!