Effective altruism is a form of maximizing, welfarist consequentialism.1
“Consequentialism,” as I’m using it, means that the primary criterion for whether an action is good is its effect on the world. Mainstream Judaism is an example of a non-consequentialist ethical system: you’re supposed to perform mitzvot such as keeping kosher, even if (often, especially if) keeping kosher does no good for the world at all. The rationale for keeping kosher doesn’t involve balancing remembrance of your covenant with God against how tasty bacon cheeseburgers are. You’re supposed to keep kosher because the rules say so.
Of course, Judaism has many consequentialist elements, such as tikkun olam. I tend to think of consequentialism as a spectrum. Almost no one is completely consequentialist or completely non-consequentialist. But some people tend, on average, to care more about the consequences of their actions, and some people tend, on average, to care more about other things.
In my experience, most effective altruists are a little bit non-consequentialist: we believe that (say) it is wrong to brutally murder innocent people even if you were sure it would have positive consequences. But those qualms don’t come up in effective altruist thinking much: the effective altruist charity evaluator GiveWell has never had to consider Assassins Without Borders for Top Charity status. So effective altruism is basically consequentialist.
“Welfarism” means that, if something is good, then it has to be good for someone (that is, it increases their well-being). Many people assume that welfarists only care about happiness or pleasure. In reality, well-being can include anything that is good or bad for a specific person: freedom, virtue, beauty, the ability to fully develop their gifts. If you’re one of those pseudo-Nietzschean Greek-statue-avatar people, well-being is the ability to crush your enemies, see them driven before you, and hear the lamentations of their women.
As with consequentialism, everyone is at least a little bit welfarist. The Stanford Encyclopedia of Philosophy (above link) says “A theory which said that [well-being] just does not matter would be given no credence at all,” which is the strongest statement I’ve ever read it make about morality. But many people care about non-welfarist goods. For example, you might value the existence of the Mona Lisa or the Notre Dame Cathedral, separately from whether anyone benefits from their existence. Or you might care about the preservation of intact natural ecosystems, separately from their economic benefit for humans or the happiness of the animals who live there. Or you might think that it’s better for everyone to be equally okay than for some people to be flourishing and some people to be miserable.
“Maximization” means that we want things to be as good as possible. Again, no one is like “I specifically want things to be worse than they could be.” If a genie appeared to you and said “do you want me to eliminate hookworm? I promise it has no bad side effects; the only thing that will happen is that children are no longer infected with parasitic worms,” everyone would say yes.
But many people have an intuition that morality should only be so demanding.2 You’re supposed to be kind to those around you and work hard at your job and tip generously and not murder anyone; once that’s done, you get to spend your time knitting and writing mediocre fanfic. You’re supposed to give to charities that have a positive effect on the world, but if your emotions are stirred more by children with cancer than by children with malaria, there’s no reason to donate to the latter.
A general trend I hope you’ve caught on to is that effective altruism is simpler than ordinary morality. Everyone is a little bit maximizing, everyone is a little bit welfarist, everyone is a little bit consequentialist. Effective altruism involves going all-in on these intuitions everyone shares.
Moral foundations theory is the theory that human moral reasoning relies on multiple moral intuitions. While different researchers give different lists, the “standard five” are:
Care: wanting people to be happy and not to suffer.
Care commands that you feed the hungry, clothe the naked, and avoid kicking adorable puppies for no reason.
Fairness: treating people justly, according to what they deserve.
Fairness commands that you pay your debts, pay your workers a reasonable wage, and punish wrongdoers.
Loyalty: helping members of your ingroup; patriotism, filial piety.
Loyalty commands that you visit your mother, stick with your friends instead of dropping them for cooler new friends, and die for your country in battle.
Authority: obedience to those in power over you; respect for tradition.
Authority demands that you follow the law, do what your boss tells you to do, and follow the ancient customs of your culture.
Purity: avoiding things which are disgusting or contaminating, and seeking out things which are sanctified and pure.
Purity demands that you honor the flag and your religion’s holy symbols, follow your religion’s dietary rules, and not have weird sex.
Most social psychology is bullshit, and I don’t think these are the exact five inborn moral intuitions or anything. But I think moral foundations theory’s basic insight is true: people draw their moral intuitions from multiple sources. As long as one of these impulses is “care,” I think my argument in the next few paragraphs goes through.
A lot of people associate effective altruism with utilitarianism. I don’t think that’s true. Many prominent effective altruists—such as Will MacAskill and Toby Ord—don’t identify as utilitarians. But I do think that effective altruism involves weighting “care” far above the other four moral foundations. Effective altruists are generally concerned about fairness and loyalty,3 at least as tools for making sure people are better off. But they often see purity, authority, and even loyalty not only as distractions but as biases, mistaken impulses that push people towards wrongdoing—and effective altruists can certainly make the case by mustering any number of disgust-laden anti-Semitic rants, “I was just following orders” excuses for atrocities, and well-paying sinecures given to undeserving nephews.
My point here isn’t to defend the effective altruist impulse to care, however. I’m not going to reason you out of your moral system. But I think something like moral foundations theory is why effective altruist morality often seems like normal morality but simpler. The care foundation tends to be welfarist (obviously) but also consequentialist and maximizing. If you care about someone, you want your actions to actually leave them better off. And suffering is bad no matter what, while most people feel okay going “eh, good enough” about the amount of weird sex they’re having. Most people intermingle the care foundation with other non-welfarist, non-consequentialist, non-maximizing foundations. Effective altruists don’t.
Because effective altruists care about things everyone cares about, the findings of effective altruism can be useful, even for people with more complex moral systems. GiveWell tells you how to most efficiently turn money into a lower mortality rate and higher consumption for human beings. If you have a strong Loyalty foundation, unlike most effective altruists, you’re unlikely to give 10% of your income to GiveWell-recommended charities when you could spend it on your kids. But if you have an extra $100, your Care foundation might nudge you to donate it to the GiveWell All Grants Fund. In this way, the existence of effective altruists can help advance the goals even of people who aren’t effective altruists. This kind of cooperation is something I hope to encourage in this series.
1. I’m sorry about the philosophical jargon but I promise these words are simpler than they look.
2. It’s possible to have maximizing moral ideologies that aren’t very demanding, like certain forms of ethical egoism (the belief that everyone should act in their enlightened self-interest).
3. Some people might doubt that effective altruists care about loyalty, but I think they do—effective altruists generally think it’s good to visit your own mother in the nursing home, and bad to instead try to redistribute the visits to the loneliest old person there.
I've always liked the "simplest possible, rudest possible" summary of the three main normative ethical theories:
Deontology: "I was just following orders"
Virtue Ethics: "Intent, in fact, IS magic"
Consequentialism: "The end justifies the means"
Works pretty well at explaining what the three actually are.
I almost agree, with one disagreement: my understanding of EA is more... directional. You say "Effective altruism involves going all-in on these intuitions everyone shares," and... that just doesn't look true to me - most EAs do not go all-in. They just go further than the median, which is itself very far from all-in.
And that is my take on EA. My morality is some CEV that I myself can't fully describe, but I can see the direction clearly. There is a level of consequentialism that would be too much for me, but on the current margin, I want to move charity toward welfarism and consequentialism. I'm also very suspicious of maximization per se, but I'm much more maximizing than the median.
I think there is a tails-come-apart thing going on now. Because there is so much unexploited inefficiency from the WC (welfarist-consequentialist) point of view (or maybe just the care-foundation point of view?), a lot of people who want more of that are part of EA. But if we win - if we feed all the hungry and prevent all the torture (and secure the future against threats) - then the disagreements will surface.
There is a lot of currently unseen disagreement of the form "value A is 10 times more important than value B," or the other way around, which is dwarfed by the best interventions being orders of magnitude better.
In a world where people are 10-30% welfarist-consequentialist, everyone who is more than 50% WC looks the same, but there is actually a big difference between 50% WC and 90% WC (and I don't believe in 100% WC).
Also, I really don't think EA is that maximizing. It's just... humans are not naturally strategic; the median level of maximization is, like, 3%, so people who are 20% maximizers look like a lot.