Discussion about this post

Bob Jacobs

> Similarly, Will MacAskill and a lot of other effective altruists have been doing work about moral uncertainty: making the ethical decision when you’re unsure what your ethical beliefs are. For example, (some people argue) if a decision is very wrong on some plausible moral views, but neutral on other plausible moral views, then you shouldn’t do it. I don’t have space to get into moral uncertainty here, but it’s a rich and interesting field. [...] By “fanatical,” the author means “you’re willing to endorse all of (the world's, EA's, whoever's) resources towards that cause area, even when your probability of success looks low.”

I have a post on why one should not only take moral uncertainty seriously, but also act on it: https://substack.com/@bobjacobs/p-157957582

However, while I agree (with you, and with MacAskill and co) that we should take moral uncertainty seriously, I disagree that MacAskill's theory of moral uncertainty helps us avoid "fanaticism". See, for example, this post and the corresponding academic paper: https://forum.effectivealtruism.org/posts/Gk7NhzFy2hHFdFTYr/a-dilemma-for-maximize-expected-choiceworthiness-mec

If you give even a tiny bit of credence to the theory that your immortal soul might end up in the Christian heaven or hell, MacAskill's theory says you should follow Christianity 100% of the time, because the infinite payoff swamps every finite consideration. Unless, that is, you also think Islam is not literally impossible, in which case you have to split your efforts with Islam. I guess that's better than "normal" fanaticism, but it's still not great. And it's not just religions: the same problem arises with e.g. an AI that promises infinite payoff (am I committing a social faux pas by speculating that this mindset might help explain why MacAskill and co fell for the FTX scam?).
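To make the worry concrete, here's a minimal sketch of maximize expected choiceworthiness (MEC) on toy numbers. Everything here is my own illustration, not MacAskill's formalism: the credences are made up, and a huge finite number stands in for the "infinite" payoff of heaven or hell.

```python
# Minimal sketch of maximize expected choiceworthiness (MEC).
# Credences and choiceworthiness values are illustrative assumptions;
# a huge finite payoff stands in for the "infinite" payoff of heaven/hell.

credences = {"christianity": 0.001, "secular_ethics": 0.999}

# Choiceworthiness of each action under each moral theory.
choiceworthiness = {
    "follow_christianity": {"christianity": 1e12, "secular_ethics": -1.0},
    "act_secularly":       {"christianity": -1e12, "secular_ethics": 1.0},
}

def expected_choiceworthiness(action):
    """Credence-weighted average of an action's choiceworthiness."""
    return sum(credences[theory] * value
               for theory, value in choiceworthiness[action].items())

for action in choiceworthiness:
    print(action, expected_choiceworthiness(action))
# follow_christianity scores ~ +1e9: the 0.1% credence swamps everything,
# so MEC recommends following Christianity even with 99.9% credence
# that secular ethics is correct.
```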

If you want to avoid fanaticism, some non-EA researchers have developed theories of moral uncertainty that do so. For example:

For example "My Favorite Option" by Gustafsson & Torpman https://johanegustafsson.net/papers/in-defence-of-my-favourite-theory.pdf (I don't recommend this one, it has some bad features), "k-Trimmed Highest Mean" by Jazon Szabo et al. https://arxiv.org/abs/2312.11589 (pretty good, but kinda ad hoc), and "Runoff Randomization" by Heitzig & me https://bobjacobs.substack.com/p/resolving-moral-uncertainty-with (good, but of course I would say that)

mmmmmm

I consider myself an effective altruist, yet I don't think I take ideas very seriously at all.

For years I was a leftist, and I was also a Christian. I have been convinced in the past by things I now think are wrong, so I only act when I am very, very convinced. I am very, very convinced that it is right to donate money to people in dire poverty, so I do that. I think this is fine: there are lots of cause areas out there, and I have limited resources, so I focus only on the surest ones.

I find arguments about the danger of AI quite compelling, but they don't pass my "very, very convinced" threshold, so I just don't have anything to do with that. I'm glad there are people out there working on it, but I personally will sit this one out.

I treat ideas lightly: I'm interested in ideas and debates, and I'm happy to hear them out precisely because I know they're unlikely to change my actions. But I try to keep an open mind, since there may be ideas out there that will one day cross my threshold of certainty.

