7 Comments

This seems applicable mostly to people who are themselves engaged with, or part of, the weird parts of "the community." If I'm someone who thinks all the longtermist/X-risk/AI risk/etc. people are wrong and annoying, and I only worry about and donate to global health, why should I have to talk about X-risk upfront, any more than a Unitarian should have to talk about the weird parts of Mormon doctrine? Like, "so here's this idea I think is really important, and here are these other people who also claim the idea is really important, although they mean something completely different by it"? I guess maybe I should just stop using the phrase "effective altruism" when I talk to people about it...

If we don't want random people to think we're weird, we can still say stuff like "well, I don't like being required to do such-and-such fundraiser, I'm picky about charities and I mostly donate to disease prevention in developing countries" if it's true, right?

Typo: "You know how your computer does exactly what you tell it to, even when you want it to do something else?" - I assume that should be "doesn't."

I'm confused by your reference to "the four primary cause areas." I've been pretty active in EA for the last couple of years, and if I'm trying to list out primary cause areas, I've got AI safety, biosecurity, nuclear security, animal welfare (which can be split between wild and farmed), global health, global poverty, mental health, improving institutional decision making, and community building. DC EA even has a cause area group, on par with all the others, for YIMBY. And I might have forgotten something. I don't see an obvious grouping of four. And I'm not aware of any canonical source listing out a specific four. What do you think the four are?

I got into EA based on "Famine, Affluence, and Morality" and started out focusing entirely on global health and development. Over the years, I became convinced of the case for x-risk prevention, which I now work on. I am unequivocally glad this happened; I don't think I could have been convinced of x-risk directly, and I don't see it as a case of having been "milk before meat"-ed.

I think the reason for this is that it is basically impossible to be an EA, even one very new to EA, and not know that EAs consider x-risk important. Even when it's not the case made in a recruitment attempt, it's extremely far from hidden information. In many cases, the only thing non-EAs (especially EA critics) know about EA is that EAs care about x-risks.

There should be a natural skepticism toward concluding "things are optimal and no changes are necessary." I think, at the very least, new members should be warned that many people start out focused on global health and shift to x-risk over time. However, I don't think the fact that this trajectory is common is a bad sign for EA, or that it reveals a need to shift recruitment tactics.

“if you’re talking to someone about effective altruism, be extremely conscientious about representing all four primary cause areas.”

There are four primary cause areas? I've always thought EA was divided into three: global poverty, animal welfare, and existential risk.