47 Comments

This is the first "what is an EA" article that makes me feel like an EA!

I've never signed up for the "you are morally obligated to behave altruistically" part of EA, but your points 1-9 are pretty much what I use to evaluate "what projects are cool/exciting/worth contributing to?"

This is going to be a good series. Looking forward to it. I'm especially interested in this one: "in particular, you should disregard special relationships and moral desert." I think that's a good encapsulation and clarifies a difference, or conflict, with care ethics. Care is built upon relationships, which are, according to care-ethics theorists, definitional to care. And care relationships are special and distinct from other relationships, usually formed out of familial connections or emotional affinity. I'm really interested in care ethics and how they can/do apply to policy, but wrestling with this difference with EA will be useful.

Very interesting comment. I feel that care ethics (maybe, just maybe, embedded in some version of virtue ethics, tho that's a recent thought and I'm not there yet) is much closer to my "personal" morality, but I ALSO think that systemic solutions should definitely be driven by something closer to EA.

the translation of care-ethics into broader or universal policy is one of my conundrums. I think care ethics can be applied systemically, but they may not offer a comprehensive or coherent guide and need to operate on top of or in relation to other schema (like human rights, EA, etc.).

I (for the most part) love and respect EAs, have been usefully informed by them, and even consider myself "EA-adjacent." Where I differ is only that [and I'm not sure how to say this without sounding a certain gross way, which is not my intent] I consider myself to have my priorities from God, which do include more care for those nearby, and don't include worries like "Will AI cause the singularity?" or "Shouldn't we spend trillions making sure we spread to the stars because eventually life here is doomed?" On a more detailed level, it causes me to focus more on food, physical care, shelter, clothing, etc., and less on, say, vaccines. (I'm very vaccine-positive! It's just not on the list from God so it gets a little bit of a downgrade.)

As far as "those nearby" goes, I am just open to the possibility that God put someone in my path to be helped by me, so for example I donate to food banks near me even though the money would save more lives providing anti-malarial bed nets in Africa.

I feel like this reasoning could really frustrate an atheist, but when I was an atheist it was all I could do to maintain low levels of contributions to EA causes. My total donations to EA causes are actually higher now even though they are a smaller part of my overall donations. Being a Christian pushes me to do more for others even if I do so less efficiently than before (in some sense). There's my meta-consequentialist justification for not being a consequentialist.

As far as Christianity and the "singularity" goes, you might be interested in two works by C.S. Lewis:

- "The Abolition of Man", in which CS Lewis discusses several ideologies that essentially wanted to construct post-human beings, and why this is a morally perilous endeavor. His version of the singularity is more biological in nature, essentially the ability to freely redesign human nature towards some goal. See, particularly, his discussion of the "Conditioners". (Lewis was heavily inspired by early proto-Singulartarian works like Stapledon's "First and Last Men.")

- "That Hideous Strength", a work of fiction which portrays an attempt to create a greater-than-human being, to gain immortality, and to gain the ability to create eternal reward or suffering.

Lewis's critique is based on the idea that humans are not wise enough to try to build a god, and they are not good enough to be trusted to redesign human nature. This is partly a theological criticism, but it follows very similar paths to Yudkowsky's concerns, for example. Lewis saw the fact that humans lacked these powers as a form of divine mercy. If we could redesign minds, we would use that power in horrifying ways.

looking forward to the series! I've been thinking about this myself on and off; a few years ago I ran a meetup on "old school EA" and to what extent that's a misnomer. The two mandatory essays I assigned were "Effective Altruism is a Question, not an Ideology" and Holly Elmore's "We are in triage every second of every day". I think if I were to add anything to your list, "triage mindset" would be it!

I also had a lot of fun putting together supplementary readings from the mid-2010s - interested in your take on how much that energy is still around in the community in 2025 vs disappeared vs secret third thing

link to the meetup + readings: https://www.lesswrong.com/events/MuFag2RFjN3E6Je72/old-school-ea-with-ea-waterloo

Nice to see someone write it out so clearly.

I was introduced to EA when, as a teenager, I wrote a post on Reddit about how sad I was that, as an unimpressive layperson, I felt like nothing could be done to improve the world. One of the only helpful comments was "there's this thing called effective altruism".

I got the impression from their main websites that EA was a very polished, uncontroversial, big-tent organisation whose goal was to get as many people as possible to donate a substantial amount to charity, and to have those charities be effective. What mostly excited me were some posts about how many problems that nearly everyone agrees are problems would be solved if everyone, or even just the world's richest 10%, donated 10%. I was so excited for this movement that surely very few could disagree with, one that could align people of so many different persuasions, opinions and perspectives. Imagine so many very ordinary people, changing the world.

Yeah… it didn't turn out to be that. It's a movement and social club for a very particular kind of person, one that now focuses much more on "talent funnelling" than on getting new donors. That aspect particularly hurt a little, because I knew at that time that I *wouldn't* be some great talent; I would never invent a new vaccine or an AI safety mechanism or have a job with an enormous salary. And what attracted me to EA was the message that that's okay, you can still be very helpful as a normie westerner. Only to discover that many of them… kinda despise normies, or at least don't care about them or see them as useful at all. Also, they are concentrated in some of the most expensive places in the world, and are all university educated, very often from, like, Oxford.

And that’s fine, I guess. But I can’t lie and say I wasn’t disappointed.

I really wish there was some sort of movement like what I thought EA was. But I no longer think it’s EA’s “job” or a failure of them as a project to not be what I wanted. It is what it is. And there’s probably also something good in EA as “talent funnelling” and socialising that wouldn’t be there if it was very big tent and neutral.

This is a really common feeling (and something I wrote my Solstice about). I personally hate EA's level of elitism. I understand the arguments for elitism, and am even convinced by them a little bit, but EA would be much better for me if it were open to all EA-minded people regardless of their capabilities. I try to cultivate this attitude in my local community (which is, you know, in SF, so I'm not fully addressing Whenyou's concerns :P).

If it's any consolation, the "great talents" in the Bay Area and Oxford have mostly succeeded in dragging EA into a series of horrible scandals, and have helped to achieve, through incompetence, the exact opposite of what they wanted in AI safety (an arms race that the West is not clearly winning).

In contrast, normies like you have just been trucking along saving lives. It seems pretty clear that you are the more effective altruist here.

Why don't you just donate 10% and not worry about what the rest of the movement is like? You can still make a difference as a normie by doing that.

I do that (well, did that, my finances are worse now). I just still wish the movement was different.

I remember one post on the EA subreddit by someone who felt bad that they were "only" a park ranger instead of working on a more notable EA cause area. I objected that a park ranger is an altruistic job directly involved in environmental preservation and even if it's not the kind of work that saves a life for $5000, it does more good than most careers. To my surprise, other commenters told me this wasn't a very EA way of thinking!

I can only guess that there are multiple lines of thought about this in EA. The one I'm familiar with encourages people to do their best, but acknowledges individual limits. The whole reason for asking people to donate 10% instead of all their disposable income is that we know demanding that much destroys people. I still think the version of EA I most believe in is one where donating 10% to global prosperity becomes normalized.

If you ask me, any definition that does not mention utilitarianism is a bad one. Because that's the main thing making EA unique, and the main thing people object to.

Or at least, it's the common cause behind most of the uniqueness and most of the sticking points.

That's covered by "welfarist, maximizing consequentialism".

A lot of people will explicitly reject #2 if you bring it up. My late wife was one of them, basically saying that America and Americans shouldn't try to help poor people in other countries as long as there were still poor people in America who needed help, even if helping Africans took fewer resources per person helped.

There's also a certain odious individual that made "America First" one of his catchphrases. :/

One other thing EA tends to be is secular; attempting to save souls from eternal damnation by persuading people to convert to the One True Religion is not generally considered an EA activity.

Really looking forward to this.

And even reading this initial list made me realise one thing that I've never seen anyone bring up: that I am probably an EA in a sort of... functional/operational/large-scale-systems way (tho with very strong speciesist reservations regarding the extent of the extension and -- I'm not sure if this is included -- a huge question mark over far-out longtermism). So I think it's close to the optimal way to "do" things like operate charity organisations, allocate international aid funds, and choose where one puts the "rationally driven" part of one's individual charitable contribution.

But -- and I'm saying this after many months, if not a few years, considering these things and a recent near-epiphany -- I'm much much further from an EA in the "general personal morality" sense.

A good analogy here is with epidemiology/public health vs my personal ("clinical") health related decision making. I absolutely feel that rational, probabilistic calculus should guide the former. I'm ABSOLUTELY not going to use it when making my own.

That's awesome! One of my hopes in this series is to open space for people who use an "EA mindset" in certain areas or as one of many approaches to problems.

I'm sympathetic to the "Government House utilitarianism" approach--that utilitarianism works much better as a political philosophy, or a guiding approach for similar large-scale impersonal institutions, than as a moral philosophy for individuals. Among other things, it avoids the Demandingness Objection to utilitarianism, which I find persuasive.

I'm curious what you use when making personal health decisions instead of rational probabilistic calculus! Feel free to ignore this or tell me to mind my own business, but if you're okay with sharing I'm very curious.

My first impulse was to just yell "VIBES" but it's not quite that. I suppose I use data but I often include/overlay "weird" / "irrational" personal criteria that are mostly to do with indulging my monkey brain (=vibes) and sometimes with priorities re risk of one kind over another that sometimes don't align with what's typically considered rational. But effectively it means that I often end up off the path recommended by the treatment (or assessment) algorithm/protocol and likely don't optimise my probabilistic outcomes.

Examples:

I was invited for the AstraZeneca covid vaccine in the spring of whichever year the vaccines were deployed (I was just over 50 then). At that time TTS had been identified as a very rare side effect. I absolutely, fully, rationally accepted that these events were very very rare (e.g. rarer than the probability of developing polio after the live polio vaccine that the older of my children got, which I had no doubts about using, mostly for pro-social reasons). But I also knew that in the mental state I was in, I'd have a freakout with every headache for 10 weeks after that jab. So I looked at my local situation (remote area, locked down, WFH, literally no covid within 40 miles) and my risks, and decided to wait until the summer, when the Pfizer jabs were made available to the remainder of the general population (= healthy under-50-year-olds). So this decision was "objectively" irrational based on epidemiology but made sense for me.

Example 2: I don't do mammography breast cancer screening. This one is imo actually a fully rational decision based on data/meta-analyses/numbers needed to treat (and test), combined with zero family history of breast cancer and minimal history of other cancers (if I get one, it's likely to be one of two others, as I smoked for 20 years and I have fairly severe GERD). BUT it runs against the nationwide public health programme and recommendations based on population data, so I guess this one is more a difference between a clinical and an epidemiological lens.

Pure vibes: I will NOT even try a SSRI or SNRI for my Broken Brain. Just because. Nope. Just no.

That all sounds very reasonable! Thanks for sharing!

"These definitions are unsatisfying, because everyone who is trying to be a good person supports those things. No one is against care and compassion. No one wants to do less good than they could for no reason. No one is like “I specifically hate evidence and reason and I’m going to make my charitable decisions using astrological charts.” Tautologically, people don’t use resources they don’t have. "

I was reading this paragraph and thinking "no, no, no, this is Typical Mind Fallacy, and it doesn't look like they are going to explain why it's wrong." So I went to write my disagreement, then continued reading to see whether I would need to change my comment. I didn't.

So: if you think that everyone agrees with the core tenets of EA, you suffer from metaphorical color blindness, because different people definitely try to do things other than maximum good.

There are a lot of people who are satisficers, not maximizers. They donate to a charity that is good enough. There are EA critics who criticize the very idea that you should do the most good, claiming it's bad and hubristic and that one shall not maximize.

The other point is that you don't have to be against something to not do it. I'm not against driverless cars, and yet I don't try to make them come sooner with my donations. I'm doing something else instead: making space for the utilitarian voice in my moral coalition. And other people are doing other things.

Some people do what their religion tells them they should do, fulfilling their religious duty. That's the same algorithm that makes them not watch TV on Shabbat.

Other people believe people have a duty to their groups, and should prioritize them. There is a spectrum there, and at its far end is the claim that wasting money on people in Africa when you have poor people in your own city is immoral and wrong. "Maximum good" is just... not the way they think about it.

Other people are trying to be good people, aka Virtue Ethics. "Being a good person" is distinctly not about doing maximum good.

(Those were just different ways of saying that most people are not consequentialists.)

Some people have causes that are close to their hearts, so they put their resources there. Theirs is not the most important cause, but it's THEIRS, and they will work on it, while other people work on the causes that are dear to them.

There are more kinds of people, whom I don't understand well enough to describe. But doing the EA thing is actually really rare. Different people who do things that are not EA are not trying to do EA and failing, or disagreeing about what is important. They are doing something else.

People tend to be caught in their own framing and are bad at understanding that different people are actually different. So it's important to me to point out that it's just not true that "everyone who is trying to be a good person supports those things." A lot of people think that being a good person is something totally different.

> The effective altruist approach to politics: small-l liberal, capitalist, and technocratic.

This is certainly a *common* stance, but I wouldn't call it distinctive. It's neither unique (see for instance the majority of the Democratic Party) nor universal (hi).

The majority of the Democratic Party are liberal, capitalist and technocratic?!

That... looks like the exact opposite of the truth. The Democratic Party looks progressive to me (and so opposed to liberal values like freedom of speech and not firing people from their jobs because of their political opinions), anti-capitalist, and anti-technology.

- Employee speech protections have never been particularly strong in the United States. The Red Scare is the obvious example, but history is littered with smaller purges. Of pacifists, of Catholics, of abolitionists... and yet it would be absurd to say that the United States only became a liberal state in the 70s. That's just not how the word is used.

- A single-digit number of self-identified socialists in congress does not an "anti-capitalistic" party make.

- Technocratic does not mean "pro-technology", it means favoring governance by technical professionals. The European Union is the actually existing technocracy par excellence. There's no direct antonym but you could do worse than "electoral".

I'm trying to understand whether we describe the same politics with different words, or have different models of the Democratic Party.

So, here is a question: if you go to some people who vote Democratic and ask them whether they are for or against capitalism, what do you think they will answer?

Your answer about liberal and technocratic makes it sound like we agree about what exists and use different words to describe it, but it doesn't work the same way for capitalism, and I think there is some deeper confusion about what you mean here.

> So, here is a question: if you go to some people who vote Democratic and ask them whether they are for or against capitalism, what do you think they will answer?

The most common response will be to look at you funny, because most people don't think about politics in such an abstract idealized way. But once you get past that, you'll get lukewarm support for "capitalism" qua tribal signal and overwhelming majorities in favor of private ownership of the means of production.

Except for the explicitly democratic-socialist minority around Sanders and the Squad (with negligible influence on party policy, as the last 16 months have shown), every Democrat from Warren onward self-identifies as strongly pro-capitalist. If you have a different model of the Democratic Party you are just... completely disconnected from political reality and too steeped in paranoid online "anti-woke" discourse (which are, I acknowledge, also two common aspects of the "effective altruist approach to politics" which Ozy forgot to mention).

None of these things get you anywhere near the AI apocalypse stuff. And none of it commits you to crazy (sorry, Parfit) ideas about obligations to potential people who may or may not exist.

#9 seems at odds with the basic tenets of EA to me. Capitalism includes a hierarchy; it includes some people doing better at the expense of others. That doesn't seem like it maximizes good for the most people.

I probably should write an essay about this at some point but the region in concept-space you are pointing at with “capitalism” is different from the region in concept-space that Ozy is pointing at with “capitalism”, and this is a common barrier that prevents antiauthoritarian leftists and progressive liberals communicating effectively.

https://en.wikipedia.org/wiki/Capitalism#Etymology

> The initial use of the term "capitalism" in its modern sense is attributed to Louis Blanc in 1850 ("What I call 'capitalism' that is to say the appropriation of capital by some to the exclusion of others") and Pierre-Joseph Proudhon in 1861 ("Economic and social regime in which capital, the source of income, does not generally belong to those who make it work through their labor").[18]: 237  Karl Marx frequently referred to the "capital" and to the "capitalist mode of production" in Das Kapital (1867).[24][25] Marx did not use the form capitalism but instead used capital, capitalist and capitalist mode of production, which appear frequently.[25][26] Due to the word being coined by socialist critics of capitalism, economist and historian Robert Hessen stated that the term "capitalism" itself is a term of disparagement and a misnomer for economic individualism.[27] Bernard Harcourt agrees with the statement that the term is a misnomer, adding that it misleadingly suggests that there is such a thing as "capital" that inherently functions in certain ways and is governed by stable economic laws of its own.[28]

> In the English language, the term "capitalism" first appears, according to the Oxford English Dictionary (OED), in 1854, in the novel The Newcomes by novelist William Makepeace Thackeray, where the word meant "having ownership of capital".[29] Also according to the OED, Carl Adolph Douai, a German American socialist and abolitionist, used the term "private capitalism" in 1863.

"Thou shalt not strike terms from others' expressive vocabulary without suitable replacement." https://www.lesswrong.com/posts/H7Rs8HqrwBDque8Ru/expressive-vocabulary

please write the essay

Here's one that already exists that comes close:

https://www.lesswrong.com/posts/3bfWCPfu9AFspnhvf/traditional-capitalist-values

When most EAs think of "capitalism" they generally think of it as "the system that brought the world such things as grocery stores, cars, washing machines, and antibiotics, by setting things up so that ambitious people could more easily accumulate wealth by producing goods and services instead of extracting wealth from others by force."

The appropriate contrast to capitalism isn't Marxism, but rather feudalism: the local rich person is rich because he controls a lot of land and can tax the people who farm it, and if he wants to get richer, the only practical way for him to do it is by getting his hands on land that currently belongs to someone else.

See also: "How to Make Wealth" by Paul Graham

https://paulgraham.com/wealth.html

I'd say most western thinkers at some point in their development note the various flaws of capitalism and think, "well, what about some sort of planned economy instead that maximizes good for the most people?" Then they look at the track record of people attempting to implement those at scale in real life, take a sharp breath, and decide maybe we should stick with capitalism for the time being.

My impression is that next to no Western liberals actually bother to answer that question; and when they do answer it based on actual social science research, rather than their social milieu's political prejudices, they have to admit, despite extreme ideological reluctance, that, yes, economic planning is in fact incommensurably superior to laissez-faire capitalism, e.g. https://www.astralcodexten.com/p/book-review-how-asia-works

I read the review you linked, and... oh dear. Scott notes in the opening paragraph that "In the 1960s, sixty million people died of famine in the Chinese countryside; by the 2010s, that same countryside was criss-crossed with the world's most advanced high-speed rail network, and dotted with high-tech factories," then elides, out of either ignorance or malfeasance, that the Great Chinese Famine *was caused in large part by central planning.* Given that he later admits he "[doesn't] know much economic history," I'm hoping it's the former.

"at the expense of others"

That is not how I would describe capitalism, and not a way that I expect self-identifying capitalists to describe it.

Or, to be more blunt: it sounds to me like a Straw Man that got repeated so many times that the group repeating it forgot that it's a lie, and distorted its own map.

I don't see a contradiction. I mean, ideally everyone would be happy, but since that's currently impossible, it's better for some people to be happy and other people to be miserable than for everyone to be miserable. And every system of economics or government involves a certain amount of inequality.

I'm not actually sure that there is any difference between the definitions that you criticise as primarily meant to persuade and the differences that you mention later. It's true that the definitions try to present EA as obvious, but they still clearly indicate the points of difference from common sense that you bring up later.

For example, most people do not actually make significant use of reason and evidence while making donations, and it would be a very unusual person indeed who reads academic papers before giving to charity, or even does an intuitive cost-benefit analysis. "Doing the most good" is just presenting consequentialism as obvious, and is in fact pointing to an important difference between Effective Altruism and normal charity. The kinds of thinking mentioned in the definitions are just very different from normal charity. When most people give to charity, what they think about is their emotional attachment and ties to particular causes, individuals, or organisations, and the vibes. To the extent that the definitions fall short of a perfectly scientific description, it's mostly by trying not to highlight differences, not by being inaccurate or even leaving stuff out.

And honestly, even the points of controversy aren't very controversial. In my experience, lots of normal people will agree with consequentialism if you argue for it, without much discussion being required. They'll just not think about it much later or apply it in real life, and will be equally easy to persuade to a different moral philosophy the next day, assuming they even remember that you were arguing for something else yesterday.

To be clear, none of this diminishes the value of your project. Just reading a definition, especially one that does not highlight controversial things, will obviously not tell you all about a movement. And of course, some differences are less obvious than the movement's definition; for example, as a sociological fact, it's obvious that people in Effective Altruism take ideas way more seriously than the average person. In fact, the biggest reason why most people are hard to persuade to become Effective Altruists is precisely that people generally do not take ideas very seriously and are not interested in moral philosophy and the like even as a theoretical exercise, much less as something to apply in real life. And while you don't need to be a consequentialist to be an Effective Altruist, if your moral philosophy doesn't care about consequences for other people, or at least other people far away from you, then it's not very surprising that you're not part of EA. You at least need to care about consequences, among other things, for EA to be something you find compelling. I am just defending the definitions from your criticism here; I generally think your post makes a lot of good points about how Effective Altruism differs from common-sense ideas, and I don't disagree at all with those parts.

I wonder if we might weaken philosophical commitments (1)+(2) to something like: effective altruists fully accept Singer's drowning child argument and apply its implications to their lives. This might reconcile the apparent tension in the comments between people's different moral philosophies and the consensus that EA is the right framework for thinking about doing good in the world. Looking forward to the series.

Well, I am an effective altruist, and I wrote a number of posts about why I disagree with Singer's drowning child argument.

I actually think that living in a world of drowning children without denial is an important part of EA. But I deeply dislike this argument, and have actually read more EA writing against it than for it; the one time I read a post that actually supported it, I was surprised.

One of the conclusions of my series of posts was that Singer was so influential not because his argument is good, but because he informed people of an important thing: there are drowning children around, and you can save them for a small cost. (All the "obligation" stuff may actually be net-negative.)

I wonder if you could explain EA anthropologically as "a group of people that vibe with each other". I know that's not useful in any way and also applies to just about any other group, but based on the rallying flag post from the slatestarchives (https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), animal welfare and bed nets and AI risk are the flag, but the secret ingredient is that it's a group of people who believe those things *and* tend to get on well enough with each other that they form a community.

I think this explains the "of course I believe in evidence, but I'm not an EA" people: they might share some or all of the EA beliefs but they don't feel they fit in to the EA/rationalist community. Which is fine, a bed net is a bed net for whatever reason you donate it.

And sadly, a nitpick: "No one is against care and compassion. [...] No one is like I specifically hate evidence and reason". Something something mumble politics grumble.
