36 Comments
Bob Jacobs:

> Similarly, Will MacAskill and a lot of other effective altruists have been doing work about moral uncertainty: making the ethical decision when you’re unsure what your ethical beliefs are. For example, (some people argue) if a decision is very wrong on some plausible moral views, but neutral on other plausible moral views, then you shouldn’t do it. I don’t have space to get into moral uncertainty here, but it’s a rich and interesting field. [...] By “fanatical,” the author means “you’re willing to endorse all of (the world's, EA's, whoever's) resources towards that cause area, even when your probability of success looks low.”

I have a post on why one should not only take moral uncertainty seriously, but also act on it: https://substack.com/@bobjacobs/p-157957582

However, while I agree (with you and with MacAskill and co) that we should take moral uncertainty seriously, I would disagree with the idea that his theory of moral uncertainty helps us avoid "fanaticism". See for example this post, or the corresponding academic paper: https://forum.effectivealtruism.org/posts/Gk7NhzFy2hHFdFTYr/a-dilemma-for-maximize-expected-choiceworthiness-mec

If you e.g. give a tiny bit of credence to the theory that your immortal soul might end up in the Christian heaven or hell, MacAskill's theory says you should follow Christianity 100% of the time. Unless you also think Islam is e.g. not literally impossible, in which case you have to split your efforts with Islam; I guess that's better than "normal" fanaticism, but it's still not great. And it's not just religions: the same problem arises with e.g. an AI that promises infinite payoff (am I committing a social faux pas by speculating that this mindset might help explain why MacAskill and co fell for the FTX scam?).
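
To make that concrete, here's a minimal toy sketch of how "maximize expected choiceworthiness" (MEC) goes fanatical. The credences and choiceworthiness numbers are invented for illustration, not taken from MacAskill's actual formalism:

```python
# Toy sketch of Maximize Expected Choiceworthiness (MEC).
# All credences and choiceworthiness numbers are made up for illustration.

credences = {"secular utilitarianism": 0.99, "christianity": 0.01}

# Choiceworthiness of each action according to each moral theory.
choiceworthiness = {
    "live secularly":      {"secular utilitarianism": 10, "christianity": -10**100},
    "follow christianity": {"secular utilitarianism": -1, "christianity": 10**100},
}

def expected_choiceworthiness(action: str) -> float:
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

for action in choiceworthiness:
    print(action, expected_choiceworthiness(action))
# Even at 1% credence, the astronomical stakes swamp everything else:
# MEC tells you to follow Christianity.
```

Nothing about the 99% theory matters here unless its stakes are comparably astronomical, which is exactly the fanaticism worry.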

If you want to avoid fanaticism, there are some non-EAs who have developed theories of moral uncertainty that do just that.

For example "My Favorite Option" by Gustafsson & Torpman https://johanegustafsson.net/papers/in-defence-of-my-favourite-theory.pdf (I don't recommend this one, it has some bad features), "k-Trimmed Highest Mean" by Jazon Szabo et al. https://arxiv.org/abs/2312.11589 (pretty good, but kinda ad hoc), and "Runoff Randomization" by Heitzig & me https://bobjacobs.substack.com/p/resolving-moral-uncertainty-with (good, but of course I would say that)

Matrice Jacobine:

> (am I committing a social faux pas by speculating that this mindset might help explain why MacAskill and co fell for the FTX scam?).

I think it's a fairly relevant thing to mention here. I was actually going to suggest that Ozy replace the mention of the Ziz case with the FTX case, if the former is "too soon" and people aren't normal about it in the comments.

Ozy Brennan:

As I'm sure I've said to both of you before, I don't actually think FTX was caused by taking ideas seriously, I think it was caused by the normal things that cause white-collar crime (bad norms and delusional optimism).

Doug S.:

I agree. I don't think their crimes were actually caused by taking ideas seriously, but it seems that they did use taking ideas seriously as a way to justify their crimes to themselves and each other.

Matrice Jacobine:

I've seen you say this before, though I don't recall it being to either of us. I think you may be right about SBF specifically, considering his evident opportunism re e.g. going on Tucker (though many have associated his actions with his past statements on the St. Petersburg paradox), but I'm less convinced about worldoptimization, considering the general philosophy she was espousing on Tumblr.

Anonymous Dude:

Honestly, from what I remember she struck me as a standard rationalist, except that, being female in a group of mostly men who don't see a lot of action, she could trade on that.

But, maybe she kept all her really incriminating thoughts to herself. Many people do.

JC:

Is worldoptimization Ziz or one of the Zizzies?

How could she trade on it - isn't she trans?

Matrice Jacobine:

I mean, yeah, that's what I meant: she seemed like a regular earning-to-giver, fairly cynical about her entire field of work. (Which, to be clear, I am more sympathetic to than SBF's "Go on Tucker Carlsen [sic], come out as a republican [sic]" bit lol.)

Bob Jacobs:

I guess that, except for Heitzig and me, I can't say for certain that they aren't EAs. Gustafsson has an EA Forum account, so he might be one, but I couldn't find anything definitive online (and Heitzig and I also have accounts, so that's no proof on its own). The others don’t have EA Forum accounts (or at least not public ones), and seem to come mostly from the AI-ethics crowd, which has long clashed with the AI-safety crowd over whether we should be skeptical of AI capitalists, or party with them.

Although now that Altman and co have staged a coup d'état against the EAs and kicked them to the curb (something anticipated by anyone who's ever seen a socialist pamphlet zip by in an airplane), the relationship is starting to shift. So it's not impossible that they are both AI-ethics scholars and EAs.

Matrice Jacobine:

> Although now that Altman and co have staged a coup d'état against the EAs and kicked them to the curb (something anticipated by anyone who's ever seen a socialist pamphlet zip by in an airplane), the relationship is starting to shift. So it's not impossible that they are both AI-ethics scholars and EAs.

The OpenAI coup attempt was in November 2023 and the TESCREAL paper was published in April 2024.

JC:

Did you see this paper?

https://globalprioritiesinstitute.org/wp-content/uploads/Hayden-Wilkinson_In-defence-of-fanaticism.pdf

It seems very hard to avoid fanaticism - you have to bite a lot of bullets!

mmmmmm:

I consider myself an effective altruist, yet I think I don't take ideas very seriously at all.

For years I was a leftist and I was also a Christian. I have been convinced in the past by things I now think are wrong. So I only take actions if I am very, very convinced. I am very, very convinced that it is right to donate money to people in dire poverty, so I do that. I think this is fine: there are lots of cause areas out there and I have limited resources, so I focus on only the very sure.

I find arguments about the danger of AI quite compelling, but they don't pass my threshold of "very, very," so I just don't have anything to do with that. I'm glad there are people out there working on it, but for me personally I'll sit this out.

I treat ideas lightly; I'm interested in ideas and debates and I'm happy to hear them because I know I'm unlikely to change my actions. But I try to keep an open mind, because there might be more ideas out there that will one day cross my threshold of certainty.

Anonymous Dude:

There also isn't much most of us can do about AI. What are we going to do, shut down Anthropic or OpenAI? It's not as if we can trust them to implement whatever 'alignment' strategy we can come up with. Businesses are great at getting around 'compliance' rules when their bottom line is affected.

I agree with you, though; I've changed my mind enough times I doubt EA or any other philosophy has all the answers. Why would our dumb little monkey brains be able to come up with the answers anyway? Eat, acquire, f***, breed if that's your thing and try to raise them to maturity. It's all most people have done throughout history anyway.

Jessie Ewesmont:

> One strategy is to have lines that you won’t cross no matter what: “you should do the thing that your math says leaves everyone the best off, unless it involves running your landlord through with a katana.”

There's an ethical system called threshold deontology, which is something very roughly along the lines of: follow strict moral rules, like "don't lie," until the negative consequences of following those rules surpass a certain threshold (say, until not lying would cause someone to die). Once the stakes are high enough, become a consequentialist. This seems to me to be the reverse version of that. "Threshold utilitarianism"?
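
If it helps, here's a toy sketch of the decision procedure I mean. The threshold and the harm numbers are invented placeholders, not anything from the threshold-deontology literature:

```python
# Toy sketch of threshold deontology as a decision procedure.
# The threshold and the harm units are invented placeholders.

HARM_THRESHOLD = 1_000  # beyond this, consequences override the rule

def may_break_rule(harm_prevented_by_breaking_it: float) -> bool:
    """Follow the rule (e.g. "don't lie") unless the stakes cross the threshold."""
    return harm_prevented_by_breaking_it > HARM_THRESHOLD

print(may_break_rule(5))          # False: ordinary stakes, the rule binds
print(may_break_rule(1_000_000))  # True: past the threshold, reason like a consequentialist
```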

Anyway, trying to figure out what those lines you won't cross should be is a thorny question. After all, we do still want to make sure we can end slavery, and it's easy to imagine someone who lived in the past going "but of course treating slaves as equal would be one of the lines I won't cross". Deontologists, both threshold and regular, have a lot of interesting things to say about how to formulate and select moral rules. I suspect Effective Altruists should pay more attention to deontological ethical theory, even if they're committed consequentialists, because they can often glean good and relevant insights from them.

PS: I suspect Principle 4 condemns certain forms of longtermism. But maybe that's a plus for you. It is for me.

Pan Narrans:

Late to the party, but: there's also the fact that most social progress is incremental. Like: yes, America outlawed slavery, probably the single greatest moral victory in its history. But it didn't outlaw slavery, equalize voting rights, elect a black president, and establish equal representation across leaders of industry all in the same week.

From the point of view of people debating the topic at the time, the idea that there would ever be a black president would have been seen as genuinely crazy. In fact, it sounds like something the pro-slavery side would have raised as a slippery slope argument.

I think this is something that supporters of utopian projects like universal basic income and the Green New Deal don't get. Even if those are ultimately good ideas, and a better future will eventually include them, you're not gonna get them in one fell swoop.

So you build on the progress of yesterday. Fortunately, the moral arc of the universe still seems to bend toward justice, despite certain recent events.

Matrice Jacobine:

The arc of the moral universe is long, but it can easily be cut short by the hinge of history.

Anonymous Dude:

Who's to say there is even an arc of the moral universe? If you asked a medieval person they'd say we turned away from God and are all going to hell.

Matrice Jacobine:

That's not the best example; whether Christianity is true is a matter of fact, not moral value.

loving-not-heyting:

This is a pretty popular line about the "Zizians," that they blindly followed the argument wherever it led, all the way to stabbing a landlord with a katana and getting in a shootout with a border patrol agent. If they had just let some common sense temper their dedication to the merciless crystalline logic of Ziz-style timeless decision theory, perhaps this disaster could have been averted.

I have never been clear on what this supposed chain of logic *is*. Sinceriously.fyi, the main primary source on this line of "reasoning," has always seemed to me pretty light on actual argument, and definitely light on rigorous and technically precise argument (as opposed to vague rhetoric obviously intended to evoke the *aesthetic* of cold impersonal reason). The "Patriarchologic" entry in Ziz's glossary has long seemed like the skeleton key to this dialectical fuzziness: laying out your claims clearly and precisely in mathematical fashion is actually basically the same as rape; the *real* rigour is going totally on vibes. And certainly nobody has ever been able or willing to spell out to me the arguments for Ziz's worldview in clear terms, or even just precisely lay out what exactly the worldview amounts to. So I am very inclined to doubt the "taking ideas seriously" explanation of "Zizianism."

On another note, I think "Don't attack your landlord with a katana" is a pretty terrible side-constraint. There are lots of circumstances where I can imagine it being reasonable to stab a landlord with a katana, in self-defence, say. Which might for all we know have been what happened to Curt Lind in Vallejo in 2022! There are other more unambiguous cases, too, like the one linked below. I am unclear why so much of the public discourse treats the stabbing as in and of itself obviously stupid; there is an obvious story where this was an act of self-preservation, and the only real public evidence we have against that story is the eyewitness testimony of a guy with fading cognitive abilities who, if he had told any other story than the one he did, would have spent the rest of his mortal days languishing behind bars for murder.

https://www.oregonlive.com/crime/2022/11/sword-death-of-portland-landlord-in-slasher-mask-ruled-self-defense.html

Hoffnung:

I can't confess to having much sympathy for the Zizians at all, but while IMO it's very easy to imagine a situation in which Curt Lind was the initial aggressor and/or was engaging in some kind of intimidation, I think it's a lot harder to imagine a situation in which the stabbing was advisable para-legally or tactically, even if it was motivated by self-preservation.

Matrice Jacobine:

Not going to comment on what conclusions, if any, to draw from those facts, for aforementioned reasons, but it should be noted that everybody focuses on the katana simply because it sounds goofy (even/especially(?) for those of us who knew Somni back on Tumblr and knew about her passion for them), ignoring that Lind's injuries were far more severe than the single katana stabbing. Which (obviously I'm not targeting Ozy here) can be part of a broader pattern of sneering at the silly trans nerds in popular coverage of this case, which should be considered fairly detestable even from a purely truth-seeking journalistic perspective, imho.

Matrice Jacobine:

TBC, are you expressing skepticism that decision theory is a major part of Ziz's doctrine in the first place, or that "taking ideas seriously" is a major reason for people joining her group rather than the usual reasons on Maslow's hierarchy of needs? (Not going to publicly comment on any ongoing legal case for both legal and personal-loyalty reasons.)

loving-not-heyting:

the latter

Matrice Jacobine:

I'm inclined to agree, but I don't think it's necessarily in conflict with what Ozy is saying, with the caveat of not defining "taking ideas seriously" as specifically "pay attention to the math" but more broadly "reorienting your entire life because you read an essay", "the inside view", "high openness to ideas" (https://thingofthings.substack.com/p/the-thirty-facets-of-the-big-five), or "cultic milieu" (https://thingofthings.substack.com/p/rationalists-and-the-cultic-milieu, though that's probably my least favorite way of phrasing it, it's a bit circular).

That is, being sucked into a cult requires both vulnerability with regard to Maslow needs *and* high openness to ideas. A community which prides itself on optimizing for the latter trait will probably spawn more cults, *as long as* it doesn't do enough to provide for the former.

See Habryka's old post for a more rigorous argument:

https://www.lesswrong.com/posts/HCAyiuZe9wz8tG6EF/my-tentative-best-guess-on-how-eas-and-rationalists

https://forum.effectivealtruism.org/posts/MMM24repKAzYxZqjn/my-tentative-best-guess-on-how-eas-and-rationalists

(In case it was not clear from my unconditional moral support for the indicted, my solution to the conundrum is not to promote less openness to ideas, but to do better at providing for the former.)

JC:

Wow, that paper linked in footnote 5 of the Violet Hour essay is amazing! Fanaticism (in the strong sense used in that paper, where you should accept Pascal's-Mugging-like scenarios) seems clearly wrong, but the paper points out some bizarre consequences of denying it!

Here's the paper: https://globalprioritiesinstitute.org/wp-content/uploads/Hayden-Wilkinson_In-defence-of-fanaticism.pdf

What do you make of this? How do you feel about extremely small probabilities of extremely high-value events (like a 10^-100 chance of 3^^^^3 blissful lives)? Should we put all EA resources towards those types of scenarios?

Anonymous Dude:

I have to say, the 'positronium' thing strikes me as a weak move from the rhetorical point of view, too remote from most people's experience. I'd just say 'longshot research on a cure for cancer' or something. That would obviously save millions of lives and is much more easily understandable by a layperson.

JC (Mar 20, edited):

That wouldn't work because 1) there's already significant research devoted to cancer, so it's hard to tell how much that would change things, and 2) medical research is likely to generate other benefits even if it doesn't achieve the moonshot goal.

In any case, it's just one example in a hypothetical, not intended to be rhetorically optimal. The article wasn't really intended for a layman.

Also, more to the point, the example was supposed to be a very, very large number of lives, or infinite, which is supposed to be remote from experience! That's kind of the point.

MoltenOak:

Haven't read the paper, but isn't that just the same old theme as Pascal's Wager/Mugging? How to deal with extremely small probabilities of extreme outcomes?

JC:

Bostrom's Pascal's Mugging paper is cited in it, yes.

We don't really know how to deal with that situation. Intuitively, it seems like fanaticism has to be wrong. But if you deny it, you get some bizarre consequences - check out the Wilkinson paper.

Anonymous Dude:

I just want to say, as someone who really isn't an EA, you've done a great job of explaining the EA philosophy and the way y'all think and how it's different from everyone else. I think I might have been an EA in an earlier, more idealistic time, and probably would have loosely agreed with the point of view of the movement if I'd been exposed to it earlier. I'm not sure how this works for the general public, but I think it works very well as a manifesto and might sway a few people in your general orbit to be closer to your side. Great job!

Anonymous Dude:

"So how do we take ideas seriously in the good way (where we abolish slavery) and not in the bad way (where we stab our landlords with katanas)?"

And as you nicely point out, that is the hard part. From what I read, stabbing the landlord with a katana had more to do with the usual motivations, like (clumsily) trying to defend your territory, than with a long list of premises that look totally insane to any non-Zizian.

My attitude is simply to accept, qua the evolutionary psychologists, that reason is made for persuasion rather than discovery of truth, and to start questioning whether you can ever really arrive at the truth at all, and whether that's even worth doing. I think for most people the answer is simply to figure out which tribe is closest to yours and is doing stuff similar to what you want to do (whether that be raising three trad kids or living in a West Coast polycule), and follow them. Of course you don't always know that a priori.

I don't have a tribe, so I just argue with people on Substack. ;)

RaptorChemist:

I think a

Matrice Jacobine:

I disagree, I think the

David Riceman:

How do you harmonize "taking ideas seriously" with using "effective altruists generally" as useful support for an argument? Headcounts seem like the opposite of evaluating ideas on their merits.

Ozy Brennan:

It's not an argument series, it's a definition series. I don't expect this to convince people, but I hope it will cause them to have a better sense of what they're disagreeing with.

Anonymous Dude:

It's very well done, thank you!
