
Disclaimer: For ease of writing, I say “effective altruism says this” or “effective altruists believe that.” In reality, effective altruism is a diverse movement, and many effective altruists believe different things. While I’m trying my best to describe the beliefs that are distinctive to the movement, no effective altruist (including me) believes everything I’m saying. While I do read widely within the effective altruism sphere, I have an inherently limited perspective on the movement, and may well be describing my own biases and the idiosyncratic traits of my own friends. I welcome corrections.
I’ve heard it said that the two principles of rationality are:
Do the math.
Pay attention to the math.
I already wrote about the first one. Here’s the second one.
I used to think that the hard part was knowing what the right answer is. Once you know it’s essentially impossible to win the lottery, then all you have to do is not buy a ticket, which is trivial. If I know I’m going to lose money on a bet in expectation,1 then it holds no appeal for me.
However, many people disagree with me on this point, fortunately for casinos everywhere. You can know perfectly well that buying a lottery ticket offers no meaningful increase in your chance of winning the lottery compared to walking along the street looking for dropped lottery tickets, and still prefer to buy a ticket.
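To make footnote 1 concrete, here's a back-of-the-envelope version with made-up numbers (no real lottery is this generous): suppose a ticket costs $2, the jackpot is $150 million, and the odds of winning are 1 in 300 million. Then

$$
\mathbb{E}[\text{ticket}] = \frac{\$150{,}000{,}000}{300{,}000{,}000} - \$2 = \$0.50 - \$2 = -\$1.50.
$$

Every ticket loses you a dollar fifty in expectation, and real lotteries are worse once you account for taxes and split jackpots.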
And it’s not just math. It’s all kinds of abstract arguments: philosophy, religion, science, politics. You read the drowning child argument and start donating 10% of your income to charity. You learn about Buddhism, sell all your possessions, fly to Vietnam, and become a monk. You read Atlas Shrugged and become a libertarian—or Das Kapital and become a Marxist.
In The Screwtape Letters, C. S. Lewis has his demon Screwtape explain to a young tempter:
It sounds as if you supposed that argument was the way to keep [a human] out of the Enemy's clutches. That might have been so if he had lived a few centuries earlier. At that time the humans still knew pretty well when a thing was proved and when it was not; and if it was proved they really believed it. They still connected thinking with doing and were prepared to alter their way of life as the result of a chain of reasoning. But what with the weekly press and other such weapons we have largely altered that. Your man has been accustomed, ever since he was a boy, to have a dozen incompatible philosophies dancing about together inside his head.
Lewis is, I think, overoptimistic about people of the past. Most people behave—and I think have always behaved—in the way Screwtape would like them to. They believe that anyone who doesn’t accept Jesus Christ as their personal savior will go to Hell; they have premarital sex and skip church to take the kids to travel soccer. They teach a class about animal ethics and the moral necessity of veganism, then they walk to the campus cafeteria to buy a bacon cheeseburger.
Taking ideas seriously is, I think, the most dangerous of the effective altruist beliefs I’ve been talking about—and, fortunately, effective altruists have become increasingly aware of its dangers. If your abstract beliefs don’t affect what you do, if you make decisions based on social norms and what seems convenient, then you’ll get basically the same outcomes everyone else around you gets. You will be a reasonably functional member of society who is about as happy as everyone else.
But if you’re making decisions based on the math, you really have to get the math right. I hope my examples make it clear that, from any perspective, most people who take ideas seriously have made major mistakes in their reasoning and therefore are doing numerous things that are somewhere between pointless and evil. The track record of taking ideas seriously should make even the most dedicated effective altruist feel qualms.
Recent events have driven home to many rationalists and effective altruists how dangerous it is to take ideas seriously. The Zizians, an extremist splinter group of rationalists/effective altruists, have been linked to eight deaths.2 The Zizians committed suicide and murder in part because they took ideas seriously. For example, their decision theory says that you should always escalate in response to threats, including from the law; naturally, they wound up in a shootout with police officers.
Given the risks of taking ideas seriously, it makes sense that a lot of people would reject it. Better to act according to social norms. This not only allows you to eat bacon cheeseburgers and go to travel soccer games, it also substantially decreases your chance of winding up on the FBI’s Most Wanted list.
Of course, if no one ever takes ideas seriously, slavery never gets abolished.
Or women are still the property of men, or we’re ruled by absolute monarchs, or you can be imprisoned for saying something those in power don’t like, or oral sex is illegal. Pick your favorite piece of moral progress: behind it is a team of wackaloon idealists who took a bunch of ideas very seriously. If you do what everyone else does, you’ll never underperform your society—but you’ll never outperform it either.
So how do we take ideas seriously in the good way (where we abolish slavery) and not in the bad way (where we stab our landlords with katanas)?
One strategy is to have lines that you won’t cross no matter what: “you should do the thing that your math says leaves everyone the best off, unless it involves running your landlord through with a katana.”3 Effective altruists commonly refuse to cross the normal lines (murder, rape) but also often have lines they won’t cross about freedom of speech and certain kinds of deception.
Similarly, Will MacAskill and a lot of other effective altruists have been doing work on moral uncertainty: how to make ethical decisions when you’re unsure which moral view is correct. For example (some people argue), if a decision is very wrong on some plausible moral views but neutral on other plausible moral views, then you shouldn’t do it. I don’t have space to get into moral uncertainty here, but it’s a rich and interesting field.
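To sketch what this looks like in practice (this is my gloss on the standard “maximize expected choiceworthiness” proposal, not necessarily MacAskill’s exact formulation): weight each moral theory by your credence in it, score each action by how choiceworthy that theory says it is, and pick the action with the highest weighted sum:

$$
\mathrm{EC}(a) = \sum_i P(T_i)\,\mathrm{CW}_i(a),
$$

where $P(T_i)$ is your credence in moral theory $T_i$ and $\mathrm{CW}_i(a)$ is how choiceworthy $T_i$ rates action $a$. An action that’s catastrophic on one plausible theory and merely neutral on the rest picks up a large negative term, which recovers the “don’t do it” verdict above.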
Finally, a lot of effective altruist thought is about where you get off the crazy train. I really recommend the whole essay I linked, but I want to highlight the author’s Principle 4:
Principle 4: When arguments lead us to conclusions that are both speculative and fanatical, treat this as a sign that something has gone wrong.
By “speculative,” the author means “a claim requiring a lot of theoretical reasoning that extends beyond our direct, observational evidence.” By “fanatical,” the author means “you’re willing to endorse all of (the world's, EA's, whoever's) resources towards that cause area, even when your probability of success looks low.”
I have also found that effective altruists generally accept Principle 4, although in a kind of unprincipled way that we’re all a bit shifty about. In general, the more speculative a claim is, the fewer resources you should put into it—though effective altruists differ about how much fewer. Even if you have a solid philosophical argument that the most important cause is acausal trade with infinite aliens, very few people should work on it. As AI risk has become more concrete, a higher percentage of effective altruist resources has gone into AI.
Even with these caveats and exceptions, I think effective altruists are very notable for their tendency to, in Screwtape’s words, connect thinking with doing and be prepared to alter their way of life as the result of a chain of reasoning. We respond to concerns about taking ideas seriously in part by coming up with a whole new set of ideas to take seriously! Reorienting your entire life because you read an essay is very effective altruist, even if you’re reorienting it towards being less affected by essays.
1. Obviously, I might enjoy playing a game and be willing to pay a certain amount of money as the cost of playing it.
2. Wired’s article is the best I’ve seen on the subject. I knew several Zizians personally before they joined the cult, and would ask everyone commenting here on the Zizians to bear that fact in mind.
3. The philosophy term for this is “side-constraints.”
> Similarly, Will MacAskill and a lot of other effective altruists have been doing work on moral uncertainty: how to make ethical decisions when you’re unsure which moral view is correct. For example (some people argue), if a decision is very wrong on some plausible moral views but neutral on other plausible moral views, then you shouldn’t do it. I don’t have space to get into moral uncertainty here, but it’s a rich and interesting field. [...] By “fanatical,” the author means “you’re willing to endorse all of (the world's, EA's, whoever's) resources towards that cause area, even when your probability of success looks low.”
I have a post on why one should not only take moral uncertainty seriously, but also act on it: https://substack.com/@bobjacobs/p-157957582
However, while I agree (with you and with MacAskill and co) that we should take moral uncertainty seriously, I would disagree with the idea that his theory of moral uncertainty helps us avoid "fanaticism". See for example this post, or the corresponding academic paper: https://forum.effectivealtruism.org/posts/Gk7NhzFy2hHFdFTYr/a-dilemma-for-maximize-expected-choiceworthiness-mec
If you give even a tiny bit of credence to the theory that your immortal soul might end up in the Christian heaven or hell, MacAskill's theory says you should follow Christianity 100% of the time. Unless, that is, you also think Islam is not literally impossible, in which case you have to split your devotion with Islam, so I guess it's better than "normal" fanaticism, but it's still not great. And it's not just religions: this also becomes a problem with e.g. an AI that promises infinite payoff (am I committing a social faux pas by speculating that this mindset might help explain why MacAskill and co fell for the FTX scam?).
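In the notation from the post above, the problem is that an infinite payoff swamps the sum at any nonzero credence (the 0.001 here is purely illustrative):

$$
\mathrm{EC}(\text{follow Christianity}) = 0.001 \times \infty + 0.999 \times c = \infty \quad \text{for any finite } c,
$$

so maximizing expected choiceworthiness recommends it no matter how implausible you find the theory.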
If you want to avoid fanaticism, there are some non-EAs who have developed theories of moral uncertainty that do that.
For example: "My Favourite Theory" by Gustafsson & Torpman, https://johanegustafsson.net/papers/in-defence-of-my-favourite-theory.pdf (I don't recommend this one; it has some bad features); "k-Trimmed Highest Mean" by Jazon Szabo et al., https://arxiv.org/abs/2312.11589 (pretty good, but kinda ad hoc); and "Runoff Randomization" by Heitzig & me, https://bobjacobs.substack.com/p/resolving-moral-uncertainty-with (good, but of course I would say that).
I consider myself an effective altruist, yet I think I don't take ideas very seriously at all.
For years I was a leftist, and I was also a Christian. I have been convinced in the past by things I now think are wrong. So I only take action if I am very, very convinced. I am very, very convinced that it is right to donate money to people in dire poverty, so I do that. I think this is fine: there are lots of cause areas out there and I have limited resources, so I focus on only the ones I'm very sure about.
I find arguments about the danger of AI quite compelling, but they don't pass my threshold of "very, very," so I just don't have anything to do with that. I'm glad there are people out there working on it, but for me personally I'll sit this out.
I treat ideas lightly; I'm interested in ideas and debates and I'm happy to hear them because I know I'm unlikely to change my actions. But I try to keep an open mind, because there might be more ideas out there that will one day cross my threshold of certainty.