This is the first "what is an EA" article that makes me feel like an EA!
I've never signed up for the "you are morally obligated to behave altruistically" part of EA, but your points 1-9 are pretty much what I use to evaluate "what projects are cool/exciting/worth contributing to?"
this is going to be a good series. Looking forward to it. I'm especially interested in this one: "in particular, you should disregard special relationships and moral desert." Because I think that's a good encapsulation and clarifies a difference, or conflict, with care ethics. Care is built upon relationships, which are, according to care-ethics theorists, definitional to care. And care relationships are special and distinct from other relationships, usually formed out of familial connections or emotional affinity. I'm really interested in care ethics and how they can/do apply to policy. But wrestling with this difference with EA will be useful.
Very interesting comment. I feel that care ethics (maybe, just maybe, embedded in some version of virtue ethics, tho that's a recent thought and I'm not there yet) is much closer to my "personal" morality, but I ALSO think that systemic solutions should definitely be driven by something closer to EA.
the translation of care-ethics into broader or universal policy is one of my conundrums. I think care ethics can be applied systemically, but they may not offer a comprehensive or coherent guide and need to operate on top of or in relation to other schema (like human rights, EA, etc.).
I often favor family over non-family. But since I am embedded in my family, I am uniquely positioned to give them what others cannot. Those are the situations I'm thinking of, where I spend my time and effort on family because my unique expertise and relationships with them mean the gains to them are greater than they otherwise would be.
As a lighthearted example, I wish for all sisters around the world to be able to have a nice lunch with each other; but I am better positioned to do good by sharing a lunch with my own sister than by flying across the world and having lunch with some random woman.
More seriously, I can do more good by raising my own children than by hiring a nanny for them while I go work as a nanny for someone else's children. As Suzanne Venker says, "There is no one a parent can pay to love a child and to sacrifice for his needs the way a mother will."
Sometimes what people need can be purchased with money, or fulfilled by the expertise of any one of many candidates. That's where moral circle expansionism works.
Sometimes what people need can only be fulfilled by one person. Families are usually like this. Such relationships are less legible, so the usual EA tools like studies shine less light on their importance. But it's clear to me that we need to factor that into our life decisions, even if it means using more intuitive tools to assess utility.
I don't really disagree. But I don't think, objectively, that you can do more good by raising your own children than by hiring a nanny or by raising someone else's kids. It could be true that that's the highest and best use of your time and resources. But it could just as easily be true that more good would be done via other options.
For me, the issue is that I WANT to do good by raising my own kids well. It's a responsibility and also a kind of pleasure or preference. It seems obvious that this is a highest/best case. But I don't think it actually is. Which doesn't mean I shouldn't raise my own kids. But that there's some other dynamic (or ethic) involved in it than pure utilitarian efficiency.
The author seems to recognize this in a later post about moral circle of concern expansion - recognizing that our circle of concern is most natural and intense for those most proximate. But that's not the EA concept.
It's human, and normal, to care about your kids MORE than other people and to put a lot more effort and resources into them. But it deserves more scrutiny and theory to understand.
That's probably the most 'autistic' part of it, for better and worse. Take the theory of utilitarianism and greatest-good to its logical conclusion.
It also shows me the ultimate Christian origins of a lot of this, with all the attempts at world government and the City of God where all nations are united under Christ. I imagine a parallel-universe EA that evolved under Confucian principles would be quite different.
looking forward to the series! I've been thinking about this myself on and off; a few years ago I ran a meetup on "old school EA" and to what extent that's a misnomer. The two mandatory essays I assigned were "Effective Altruism is a Question, not an Ideology" and Holly Elmore's "We are in triage every second of every day". I think if I were to add anything to your list, "triage mindset" would be it!
I also had a lot of fun putting together supplementary readings from the mid-2010s - interested in your take on how much that energy is still around in the community in 2025 vs disappeared vs secret third thing
link to the meetup + readings: https://www.lesswrong.com/events/MuFag2RFjN3E6Je72/old-school-ea-with-ea-waterloo
I (for the most part) love and respect EAs, have been usefully informed by them, and even consider myself "EA-adjacent." Where I differ is only that [and I'm not sure how to say this without sounding a certain gross way, which is not my intent] I consider myself to have my priorities from God, which do include more care for those nearby, and don't include worries like "Will AI cause the singularity?" or "Shouldn't we spend trillions making sure we spread to the stars because eventually life here is doomed?" On a more detailed level, it causes me to focus more on food, physical care, shelter, clothing, etc., and less on, say, vaccines. (I'm very vaccine-positive! It's just not on the list from God so it gets a little bit of a downgrade.)
As far as "those nearby" goes, I am just open to the possibility that God put someone in my path to be helped by me, so for example I donate to food banks near me even though the money would save more lives providing anti-malarial bed nets in Africa.
I feel like this reasoning could really frustrate an atheist, but when I was an atheist it was all I could do to maintain low levels of contributions to EA causes. My total donations to EA causes are actually higher now even though they are a smaller part of my overall donations. Being a Christian pushes me to do more for others even if I do so less efficiently than before (in some sense). There's my meta-consequentialist justification for not being a consequentialist.
Lewis actually talks about the 'Tao' in the Abolition of Man. He points out common threads in moral systems across religions. While I'm sure some anthropologist has done a better job since then, it's an interesting exploration since these are moral 'virtues' that presumably exist across cultures and are therefore likely essential to human flourishing.
As far as Christianity and the "singularity" goes, you might be interested in two works by C.S. Lewis:
- "The Abolition of Man", in which CS Lewis discusses several ideologies that essentially wanted to construct post-human beings, and why this is a morally perilous endeavor. His version of the singularity is more biological in nature, essentially the ability to freely redesign human nature towards some goal. See, particularly, his discussion of the "Conditioners". (Lewis was heavily inspired by early proto-Singulartarian works like Stapledon's "First and Last Men.")
- "That Hideous Strength", a work of fiction which portrays an attempt to create a greater-than-human being, to gain immortality, and to gain the ability to create eternal reward or suffering.
Lewis's critique is based on the idea that humans are not wise enough to try to build a god, and they are not good enough to be trusted to redesign human nature. This is partly a theological criticism, but it follows very similar paths to Yudkowsky's concerns, for example. Lewis saw the fact that humans lacked these powers as a form of divine mercy. If we could redesign minds, we would use that power in horrifying ways.
Yeah, the redesigning of minds is discussed in Lewis's non-fiction. The novel is more about attempting to achieve biological immortality (for a test subject) by deeply sketchy means. Lewis made it clear in his other writings that the novel wasn't intended as a serious philosophical work, but rather as an entertaining dramatization of an argument he had made elsewhere.
I was introduced to EA when I, as a teenager, wrote some post on Reddit about how I was so sad that I felt like nothing could be done to improve the world as an unimpressive layperson. Then one of the only helpful comments was "there's this thing called effective altruism".
I got the impression from their main websites that EA was a very polished, uncontroversial, big-tent organisation whose goal was to get as many people as possible to donate a substantial amount to charity, and to have those charities be effective. What mostly excited me were some posts about how many problems, ones that nearly everyone agrees are problems, would be solved if everyone, or even just the world's richest 10%, donated 10%. I was so excited for this movement that surely very few could disagree with, one that could align people from so many different persuasions, opinions and perspectives. Imagine so many very ordinary people, changing the world.
Yeah… didn't turn out to be that. It's a movement and social club for a very particular kind of person, one that now focuses much more on "talent funnelling" than on getting new donors. That aspect particularly hurt a little, because I knew at that time that I *wouldn't* be some great talent; I would never invent a new vaccine or an AI safety mechanism or have a job with an enormous salary. And what attracted me to EA was the message that it's okay, you can still be very helpful as a normie westerner. Only to discover many of them… kinda despise normies, or at least don't care about them or see them as useful at all. Also, they are concentrated in some of the most expensive places in the world, and all university educated, very often from, like, Oxford.
And that’s fine, I guess. But I can’t lie and say I wasn’t disappointed.
I really wish there was some sort of movement like what I thought EA was. But I no longer think it’s EA’s “job” or a failure of them as a project to not be what I wanted. It is what it is. And there’s probably also something good in EA as “talent funnelling” and socialising that wouldn’t be there if it was very big tent and neutral.
This is a really common feeling (and something I wrote my Solstice about). I personally hate EA's level of elitism. I understand the arguments for elitism, and am even convinced by them a little bit, but EA would be much better for me if it were open to all EA-minded people regardless of their capabilities. I try to cultivate this attitude in my local community (which is, you know, in SF, so I'm not fully addressing Whenyou's concerns :P).
If it's any consolation, the "great talents" in the Bay Area and Oxford have mostly succeeded in dragging EA into a series of horrible scandals, and have helped to achieve, through incompetence, the exact opposite of what they wanted in AI safety (an arms race that the West is not clearly winning).
In contrast, normies like you have just been trucking along saving lives. It seems pretty clear that you are the more effective altruist here.
I suspect the "great talents" in things like computer science don't translate into great talents in other aspects of life, just as we wouldn't really expect Bill Clinton or Ronald Reagan to be all that good at computer programming. (Though I suspect Clinton probably could have buckled down and done OK if he had to for some bizarre reason.)
For the record: the "rationalist"/"longtermist" factions have always opposed this. The AGI arms race was/is driven by those who came to EA from a traditional liberal philanthropic background.
As early as January 2019, more than a month before GPT-2 was even announced, Yudkowsky was using Musk's founding of OpenAI as one example of "awful counterproductive policies" in an otherwise unrelated debate (https://x.com/ESYudkowsky/status/1079915734455611397). And in October 2021 (again, more than a year before the release of GPT-3.5 and ChatGPT launched the LLM craze in earnest) he launched a public diatribe on Twitter starting with "Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival" (https://x.com/ESYudkowsky/status/1446562238848847877).
Having conversed with people involved in the PauseAI movement (which I sometimes think is strategically silly, but which is at least value-aligned with the long-term survival and flourishing of the human community), while there may still be some EA-adjacent people, especially in the rank and file, most of the leadership have already declared EA to be a lost cause, hopelessly captured by corporate interests and the national security state. Various other longtermist orgs, including Tegmark's FLI and the various national AI Safety Institutes, are rapidly converging on the same stance.
I remember one post on the EA subreddit by someone who felt bad that they were "only" a park ranger instead of working on a more notable EA cause area. I objected that a park ranger is an altruistic job directly involved in environmental preservation and even if it's not the kind of work that saves a life for $5000, it does more good than most careers. To my surprise, other commenters told me this wasn't a very EA way of thinking!
I can only guess that there are multiple lines of thought about this in EA. The one I'm familiar with encourages people to do their best, but acknowledges individual limits. The whole reason for asking people to donate 10% instead of all their disposable income is that we know demanding that much destroys people. I still think the version of EA I most believe in is one where donating 10% to global prosperity becomes normalized.
Yeah, nothing wrong with being a park ranger. We can't all be finance bros and techies. I think a lot of these guys are way too proud of their top-15 college degrees and grades. Not that there's anything wrong with that, but it doesn't really work as a moral basis.
If you ask me, any definition that does not mention utilitarianism is a bad one. Because that's the main thing making EA unique, and the main thing people object to.
Or at least, it's the common cause behind most of the uniqueness and most of the sticking points.
"These definitions are unsatisfying, because everyone who is trying to be a good person supports those things. No one is against care and compassion. No one wants to do less good than they could for no reason. No one is like “I specifically hate evidence and reason and I’m going to make my charitable decisions using astrological charts.” Tautologically, people don’t use resources they don’t have. "
I was reading this paragraph and thinking "no, no, no, this is Typical Mind Fallacy, and it doesn't look like they are going to explain why it's wrong". so... I went to write my disagreement, then continued reading to see whether I'd need to change my comment. I didn't.
so, if you think that everyone agrees about the core tenets of EA, you suffer from metaphorical color blindness. because different people definitely try to do things that are not doing maximum good.
There are a lot of people who are satisficers, not maximizers. They donate to a charity that is good enough. There are EA critics who criticize the very idea that you should do the most good, claiming it's bad and hubristic and that one shall not maximize.
The other point is that you don't have to be against something to not do it. I'm not against driverless cars, and yet I don't try to make them come sooner with my donations. I'm doing something else instead: making room for the utilitarian voice in my moral coalition. and other people are doing other things.
Some people do what their religion tells them they should do, fulfilling their religious duty. It's the same algorithm that makes them not watch TV on Shabbat.
Other people believe people have a duty to their groups, and should prioritize them. There is a spectrum there, and its far end says that spending money on people in Africa when you have poor people in your own city is immoral and wrong. "maximum good" is just... not the way they think about it.
other people are trying to be good people, aka Virtue Ethics. "being a good person" is distinctly not about doing maximum good.
(those were just different ways of saying that most people are not consequentialists)
Some people have causes that are close to their heart, so they put their resources there. it's not the most important cause, but it's THEIRS, and they will work on it, and other people will work on the causes that are dear to them.
There are more different kinds of people, whom I don't understand well enough to describe. but doing the EA thing is actually really rare. Different people who do things that are not EA are not trying to do EA and failing, or disagreeing on what is important. They are doing something else.
People tend to be caught in their framing and bad at understanding that different people are actually different. So it's important to me to point out that it's just not true that "everyone who is trying to be a good person supports those things". A lot of people think that being a good person is something totally different.
I wonder if you could explain EA anthropologically as "a group of people that vibe with each other". I know that's not useful in any way and also applies to just about any other group, but based on the rallying flag post from the slatestarchives (https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), animal welfare and bed nets and AI risk are the flag, but the secret ingredient is that it's a group of people who believe those things *and* tend to get on well enough with each other that they form a community.
I think this explains the "of course I believe in evidence, but I'm not an EA" people: they might share some or all of the EA beliefs but they don't feel they fit in to the EA/rationalist community. Which is fine, a bed net is a bed net for whatever reason you donate it.
And sadly, a nitpick: "No one is against care and compassion. [...] No one is like I specifically hate evidence and reason". Something something mumble politics grumble.
A lot of people will explicitly reject #2 if you bring it up. My late wife was one of them, basically saying that America and Americans shouldn't try to help poor people in other countries as long as there were still poor people in America that needed help, even if helping Africans took less resources per person helped.
There's also a certain odious individual that made "America First" one of his catchphrases. :/
One other thing EA tends to be is secular; attempting to save souls from eternal damnation by persuading people to convert to the One True Religion is not generally considered an EA activity.
And even reading this initial list made me realise one thing that I've never seen anyone bring up: that I am probably an EA in a sort of... functional/operational/large scale systems way (tho with very strong species-ist reservations regarding the extent of the extension and -- I'm not sure if this is included -- huge question mark over far-out-longtermism). So I think it's close to the optimal way to "do" things like operate charity organisations, allocate international aid funds, and choose where one puts the "rationally driven" part of one's individual charitable contribution.
But -- and I'm saying this after many months, if not a few years, considering these things and a recent near-epiphany -- I'm much much further from an EA in the "general personal morality" sense.
A good analogy here is with epidemiology/public health vs my personal ("clinical") health related decision making. I absolutely feel that rational, probabilistic calculus should guide the former. I'm ABSOLUTELY not going to use it when making my own.
That's awesome! One of my hopes in this series is to open space for people who use an "EA mindset" in certain areas or as one of many approaches to problems.
I'm sympathetic to the "Government House utilitarianism" approach--that utilitarianism works much better as a political philosophy, or a guiding approach for similar large-scale impersonal institutions, than it is as a moral philosophy for individuals. Among other things, it avoids the Demandingness Objection to utilitarianism, which I find persuasive.
I'm curious what you use when making personal health decisions instead of rational probabilistic calculus! Feel free to ignore this or tell me to mind my own business, but if you're okay with sharing I'm very curious.
My first impulse was to just yell "VIBES" but it's not quite that. I suppose I use data, but I often include/overlay "weird" / "irrational" personal criteria that are mostly to do with indulging my monkey brain (=vibes) and sometimes with priorities re risk of one kind over another that don't align with what's typically considered rational. But effectively it means that I often end up off the path recommended by the treatment (or assessment) algorithm/protocol and likely don't optimise my probabilistic outcomes.
Examples:
I was invited for the AstraZeneca covid vaccine in the spring of whichever year the vaccines were deployed (I was just over 50 then). At that time TTS (thrombosis with thrombocytopenia syndrome) had been identified as a very rare side effect. I absolutely, fully, rationally accepted that these events were very very rare (e.g. rarer than the probability of developing polio after the live polio vaccine that the older of my children got, which I had no doubts about using, mostly for pro-social reasons). But I also knew that in the mental state I was in, I'd have a freakout with every headache for 10 weeks after that jab. So I looked at my local situation (remote area, locked down, WFH, literally no covid within 40 miles) and my risks, and decided to wait until the summer, when the Pfizer jabs were made available to the remainder of the general population (= healthy under-50-year-olds). So this decision was "objectively" irrational based on epidemiology but made sense for me.
Example 2: I don't do mammography breast cancer screening. This one is imo actually a fully rational decision based on data/meta-analyses/numbers needed to treat (and test), combined with zero family history of breast cancer and minimal history of other cancers (if I get one, it's likely to be one of two others, as I smoked for 20 years and I have fairly severe GERD). BUT it runs against a nationwide public health programme and recommendations based on population data, so I guess this one is more a difference between a clinical and an epidemiological lens.
Pure vibes: I will NOT even try an SSRI or SNRI for my Broken Brain. Just because. Nope. Just no.
> The effective altruist approach to politics: small-l liberal, capitalist, and technocratic.
This is certainly a *common* stance, but I wouldn't call it distinctive. It's neither unique (see for instance the majority of the democratic party) nor universal (hi).
the majority of the democratic party are liberal, capitalistic and technocratic?!
that... looks like the exact opposite of the truth. the democratic party looks progressive to me (and so opposed to liberal values like freedom of speech and not firing people from their jobs because of their political opinions), anti-capitalistic, and anti-technology.
- Employee speech protections have never been particularly strong in the United States. The Red Scare is the obvious example, but history is littered with smaller purges. Of pacifists, of Catholics, of abolitionists... and yet it would be absurd to say that the United States only became a liberal state in the 70s. That's just not how the word is used.
- A single-digit number of self-identified socialists in congress does not an "anti-capitalistic" party make.
- Technocratic does not mean "pro-technology", it means favoring governance by technical professionals. The European Union is the actually existing technocracy par excellence. There's no direct antonym but you could do worse than "electoral".
I'm trying to understand if we describe the same politics with different words, or we have different models of the democratic party.
so, here is a question: if you go to some people who vote democratic and ask them if they are for or against capitalism, what do you think they will answer?
your answer about liberal and technocratic makes it sound like we agree about what exists and use different words to describe it, but it doesn't work the same way with capitalism. and I think there is some deeper confusion about what you mean here.
> so, here is a question: if you go to some people who vote democratic and ask them if they are for or against capitalism, what do you think they will answer?
The most common response will be to look at you funny, because most people don't think about politics in such an abstract idealized way. But once you get past that, you'll get lukewarm support for "capitalism" qua tribal signal and overwhelming majorities in favor of private ownership of the means of production.
Except for the explicitly democratic-socialist minority around Sanders and the Squad (with negligible influence on party policy, as the last 16 months have shown), every Democrat from Warren onward, Warren included, self-identifies as strongly pro-capitalist. If you have a different model of the Democratic Party you are just... completely disconnected from political reality and too steeped in paranoid online "anti-woke" discourse (which are, I acknowledge, also two common aspects of the "effective altruist approach to politics" which Ozy forgot to mention).
I actually AM completely disconnected from USA politics. (I'm also trying to be mostly disconnected from my own country's politics, but this is harder.) I'm totally fine finding out that all the loud anti-capitalism declarations from the democrats that manage to find me despite my vigilant efforts to avoid them are... not actually as important as they seem, and totally fine with being told I am wrong (though I consider the accusation of paranoia lacking in good faith).
so, in your model, the mainstream democrats are capitalists de facto, but... they never say that?
because, see, in my model of the world, if a party supports something, it will say so: important people in the party will say "capitalism is good, actually!". this will especially happen if someone else in the party is saying "capitalism is bad!". and if all the signaling is going in one direction, the side that is unsayable will lose.
so it looks to me (and we already established I don't actually understand USA politics, so you are more than invited to correct me if I'm wrong) that there are two main options:
(1) there are people who are pro-capitalism and democrats, and they are saying so, and I didn't hear it because it's not news-worthy or scissory enough. in that case, there are declarations from Obama and Biden and Harris that capitalism is good and they support it and the USA is a capitalist country. or,
(2) democrats are anti-capitalists, and there are no such declarations, and there are anti-capitalism declarations from other party members while the candidates try to remain ambiguous about that.
the third option is that they are pro-capitalism, but not saying it and not disagreeing with the more radical elements of their party. and I claim that this situation is unstable and will move toward the second one.
but, I'm still not trying to settle the disagreement, I'm still on the understanding-your-model step.
so, is your model 1, 2, 3, or something different?
Frankly, "everyone in the US is very pro-capitalism" is a very basic fact about how the world works that I expect anyone up to and including even (especially?) people in Amazonian tribes to be aware of. But considering you asked so nicely.
“I am a capitalist. I am a pragmatic capitalist,” Harris said in the interview at the vice president’s official residence in the Naval Observatory in Washington. “I believe that we need a new generation of leadership in America that actively works with the private sector to build up the new industries of America, to build up small-business owners, to allow us to increase home ownership.” https://www.nbcnews.com/politics/2024-election/harris-says-pragmatic-capitalist-pitch-latino-voters-rcna176702
Joe Biden:
Last week, as President Joe Biden signed “An Executive Order Promoting Competition in the American Economy,” he echoed the language of his predecessors. “[C]ompetition keeps the economy moving and keeps it growing,” he said. “Fair competition is why capitalism has been the world’s greatest force for prosperity and growth…. But what we’ve seen over the past few decades is less competition and more concentration that holds our economy back.” https://publicseminar.org/essays/a-proud-capitalist-joe-biden-is-championing-competition/
"Look, I’m a capitalist. I have no problem with companies making reasonable profits. But not absurd levels on the backs of working families and seniors – it's about basic fairness." https://x.com/POTUS/status/1636400685649371136
Elizabeth Warren:
“I am a capitalist to my bones,” Sen. Warren tells New England Council, one of several instances this morning where she’s highlighted her belief in capitalism and markets while talking bankruptcy policy https://x.com/katielannan/status/1018852303212896257
"I am a capitalist. Come on. I believe in markets. What I don’t believe in is theft, what I don’t believe in is cheating. That’s where the difference is. I love what markets can do, I love what functioning economies can do. They are what make us rich, they are what create opportunity. But only fair markets, markets with rules. Markets without rules is about the rich take it all, it’s about the powerful get all of it. And that’s what’s gone wrong in America." https://www.cnbc.com/2018/07/23/elizabeth-warren-i-am-a-capitalist-but-markets-need-rules.html
"Well, American capitalism is one of the most productive forces ever known to man, and there’s so much that this country has been able to unlock, especially in the last century, in terms of technology, in terms of prosperity. Now where it goes wrong is when it’s only being experienced in certain parts of the country or by certain kinds of people, and I think it goes to show just how important it is for capitalism to work that it be backed by all of the other pieces that business alone can’t solve. But when it’s working right, there’s nothing like it. It’s extraordinary." https://www.cnbc.com/2019/04/12/2020-candidate-pete-buttigieg-on-taxing-the-rich-future-of-us-capitalism.html
Hillary Clinton:
Clinton jumped in, saying: “When I think about capitalism I think about all the business that were started because we have the opportunity and the freedom to do that and to make a good living for themselves and their families… We would be making a grave mistake to turn our backs on what built the greatest middle class in the history.” https://time.com/4072583/democratic-debate-hillary-clinton-bernie-sanders-capitalism/
None of these things get you anywhere near the AI apocalypse stuff. And none of it commits you to crazy (sorry, Parfit) ideas about obligations to potential people who may or may not exist.
Isn't it relatively uncontroversial that, even though they don't yet exist, hurting "our grandchildren" through climate change is at least somewhat bad, if indeed we are doing that? Stupid conservatives debate whether climate change is happening, smart conservatives debate whether lowering emissions is worth it given the harms to current people, and economists downweight benefits of climate mitigation that are far enough in the future, but few people outright deny that if climate change harms future people, that is bad, no? You might think that's wrong, but then I think you're the one making the claim that runs counter to common sense for theoretical reasons.
What *is* controversial about Parfit/longtermist EA is the much more specific claim that it is bad if people who would have had good lives don't come into existence at all. But that is a much more specific claim than the claim that it is bad if current actions harm children in 2080. Although the only paper I've seen that tests what ordinary people think about this found that they were surprisingly sympathetic to the idea that we have some reason to create happy people just because they will be happy: https://globalprioritiesinstitute.org/population-ethical-intuitions-caviola-althaus-mogensen-and-goodwin/ (Mind you, this also shows ordinary people reject a standard objection to average utilitarianism that I've always thought was completely decisive.)
#9 seems at odds with the basic tenets of EA to me. Capitalism includes a hierarchy; it includes some doing better at the expense of others. That doesn't seem like it maximizes good for the most people.
I probably should write an essay about this at some point, but the region in concept-space you are pointing at with “capitalism” is different from the region in concept-space that Ozy is pointing at with “capitalism”, and this is a common barrier that prevents antiauthoritarian leftists and progressive liberals from communicating effectively.
> The initial use of the term "capitalism" in its modern sense is attributed to Louis Blanc in 1850 ("What I call 'capitalism' that is to say the appropriation of capital by some to the exclusion of others") and Pierre-Joseph Proudhon in 1861 ("Economic and social regime in which capital, the source of income, does not generally belong to those who make it work through their labor").[18]: 237 Karl Marx frequently referred to the "capital" and to the "capitalist mode of production" in Das Kapital (1867).[24][25] Marx did not use the form capitalism but instead used capital, capitalist and capitalist mode of production, which appear frequently.[25][26] Due to the word being coined by socialist critics of capitalism, economist and historian Robert Hessen stated that the term "capitalism" itself is a term of disparagement and a misnomer for economic individualism.[27] Bernard Harcourt agrees with the statement that the term is a misnomer, adding that it misleadingly suggests that there is such a thing as "capital" that inherently functions in certain ways and is governed by stable economic laws of its own.[28]
> In the English language, the term "capitalism" first appears, according to the Oxford English Dictionary (OED), in 1854, in the novel The Newcomes by novelist William Makepeace Thackeray, where the word meant "having ownership of capital".[29] Also according to the OED, Carl Adolph Douai, a German American socialist and abolitionist, used the term "private capitalism" in 1863.
The problem is that when one person thinks "chemicals" means "lead and arsenic contamination" and another person thinks "chemicals" means "unsafe food dyes", they will have communication difficulties even if each is individually using a valid definition. Etymology is intellectually interesting, but using the original definition of a word does not solve the communication problem if one's interlocutor doesn't use the same definition.
A good Schelling point is that those who want to say "free markets" can say "free markets" and those who want to say the original meaning of "capitalism" can say "capitalism".
When most EAs think of "capitalism" they generally think of it as "the system that brought the world such things as grocery stores, cars, washing machines, and antibiotics, by setting things up so that ambitious people could more easily accumulate wealth by producing goods and services instead of extracting wealth from others by force."
The appropriate contrast to capitalism isn't Marxism, but rather feudalism: the local rich person is rich because he controls a lot of land and can tax the people who farm it, and if he wants to get richer, the only practical way for him to do it is by getting his hands on land that currently belongs to someone else.
That sounds very backward-thinking. Marx saw Capitalism as the natural replacement for Feudalism, and Communism as the natural replacement for Capitalism. I'm not completely on board with his ideas about what comes next, but saying "we've already figured out the best possible economic system" doesn't seem like the sort of optimistic, forward-thinking approach I tend to expect from EAs.
Optimism and forward thinking won't get you anywhere without practicality. Considering the track record of attempts at replacing capitalism, is it likely that EAs will be able to come up with a better system, then either take over a country or start their own, and then implement their system without causing unrest, economic collapse, or human rights violations? Probably not. Better to stick with things that have a high chance of success and a low chance of backfiring horribly, like buying bed nets, even if they only work on a small scale.
At least we don't have history books full of stories where someone created a machine god, and it killed millions of people and caused decades of poverty and oppression.
The EAs themselves write books full of stories about why creating a machine god would lead to killing billions of people and causing aeons of astronomical waste.
that is not how I describe Capitalism, and not a way that I expect self-identifying capitalists to describe it.
or, to be more blunt: it sounds to me like a Straw Man that got repeated so many times that the group repeating it forgot that it's a lie, and distorted its own map.
Is it a straw man if it's an accurate description of the system as it exists in the real world, though? Those who support Capitalism tend to idealize it, in similar ways to how Communists idealize Communism. The actual actions of Capitalist regimes demonstrate that the "ideal" Capitalism they describe is not what actually exists; pointing that out tends to get Capitalism's supporters talking about cronyism, but they can't point at a Capitalist regime that doesn't include cronyism.
So can you point to any regime that has ever existed in history that didn't include cronyism? The argument isn't that Capitalism is perfect, but that it's the best system available.
so, the way I understand it, what you write doesn't make sense in context. the context as I see it is that Ozy wrote that EAs believe in capitalism, and you wrote "#9 seems at odds with the basic tenets of EA to me. Capitalism includes a hierarchy; it includes some doing better at the expense of others. That doesn't seem like it maximizes good for the most people."
and it looks to me like a failure in theory of mind, because that's not what capitalists believe in. I have a mental model of communists in which they want to build utopia. I will not say about a hypothetical EA communist that communism is at odds with EA because it's murderous, because I know that communists don't see themselves as murderous, and want good things. I may say it goes against empiricism, but that's a different claim.
and your comment does not demonstrate this understanding of how capitalists see themselves.
you can understand something without agreeing with it. I am not a communist. but I'm not here to have the actual argument about capitalism. I'm here to point out this weird failure in theory of mind.
like, you should know what the people you disagree with believe. did you actually fail to learn what the people you disagree with believe? it's important to have a good model of the world, including what other people believe. or do you know what we believe, but pretend not to, because arguments are soldiers and it makes your argument sound better (it doesn't, and also, it's a shameful thing to do)? or, and this is my leading hypothesis, did you do some slippery muddled thinking, and let your counter-arguments somehow slip into your model of how capitalists think about themselves?
in my understanding a big part of EA is discarding how people imagine a system works in favor of the actual physical manifestation of the system. I would absolutely call out an EA communist for dismissing the USSR, etc. as "not real communism", and EA Capitalists need the same treatment.
can you write, like, 5 paragraphs, or a post, on that? because I have my own model of "what is EA", and this model does not include "discarding how people imagine a system works in favor of the actual physical manifestation of the system", at all.
I can... try to take a wild guess and say it comes from empiricism, but EA empiricism is, like: check whether it's better to sell malaria nets or give them away for free, see that free is better, and then give them away for free. it works on a different, and lower, level of world-modeling than the level of capitalism and communism.
moreover, in my model, it's important to separate EA from communism-vs-capitalism-level discussions. when EA tried to do prison reform, it did wrong and DILUTED the special essence of EA. because part of EA, its Randomista spirit, its pulling-the-rope-sideways, is to avoid that.
I will personally judge a communist EA, but I will not judge, in an EA-related way, a communist EA who is doing the normal EA things: donating to vaccines and malaria nets and to reducing the torture of animals.
and it looks like you not only have a model of EA that emphasizes different things, it looks like your model actively contradicts mine. so... can you explain your model more? (or ask me questions about mine. I don't know what questions to ask about yours, because my reaction is mostly ????)
well, to use malaria nets as an example: if malaria nets were routinely treated with a substance that was supposed to repel mosquitos but did not do so, and instead either attracted them or gave the people using the nets some disease, then people would believe that malaria nets were making people's lives better, but it wouldn't be true. An effective altruist would look at the data, see that people who had been given nets were actually at higher risk of dying, and wouldn't support distributing malaria nets.
Economic systems are certainly more complex than that, and I can agree with your approach, that it's better to focus on smaller, more practical actions that make a meaningful difference. But that also contradicts the idea that EAs are Capitalist (or technocratic or liberal). For EAs to favor capitalism, or any other large-scale, systemic approach, they need to put aside idealistic ideas about those systems and look at them in practical ways, just as they would with malaria nets. That's difficult when it comes to Capitalism, because it is so omnipresent, but we can see that people had more leisure time under Feudalism, and less homelessness, better access to medical care, etc., under Socialist systems. Defaulting to Capitalism because it's everywhere is as much of a copout as deciding nobody really needs a malaria net because the ones available are treated with toxic chemicals.
I don't see a contradiction. I mean, ideally everyone would be happy, but since that's currently impossible, it's better for some people to be happy and other people to be miserable than for everyone to be miserable. And every system of economics or government involves a certain amount of inequality.
But does Capitalism actually lead to the most happiness for the most people? Based on statistics from more and less Capitalist countries, it doesn't seem to.
What less capitalist countries? If you're thinking of some European country like Finland, I will point out that that country has a mixed economy, like the United States and most other countries today. Finland is a little less capitalistic than the US, but much more capitalistic than China, Vietnam, or Cuba, all countries I'm glad I don't live in. As far as I can tell, there's only one country in the world with a completely centrally planned economy (not counting black market activity), and that's North Korea. There used to be more, but they either went at least partly capitalist or collapsed.
I think it is an error to think of Capitalism as an unplanned economy. It's an economy controlled by Capital, planned and organized to best benefit those with the most of it.
My impression is that next to no Western liberals actually bother to answer that question, and that when they do answer it based on actual social science research rather than their social milieu's political prejudices, they have to admit, despite extreme ideological reluctance, that, yes, economic planning is in fact incommensurably superior to laissez-faire capitalism, e.g. https://www.astralcodexten.com/p/book-review-how-asia-works
I read the review you linked, and... oh dear. Scott notes in the opening paragraph that, "In the 1960s, sixty million people died of famine in the Chinese countryside; by the 2010s, that same countryside was criss-crossed with the world's most advanced high-speed rail network, and dotted with high-tech factories." He then elides, out of either ignorance or malfeasance, that the Great Chinese Famine *was caused in large part by central planning.* Given that he later admits he "[doesn't] know much economic history," I'm hoping it's the former.
> When development planning began in China after the revolution (1949) and in India after its independence (1947), both countries were starting from a very low base of economic and social achievement. The gross national product per head in each country was among the lowest in the world, hunger was widespread, the level of illiteracy remarkably high, and life expectancy at birth not far from 40 years. There were many differences between them, but the similarities were quite striking. Since then things have happened in both countries, but the two have moved along quite different routes. A comparison between the achievements of China and India is not easy, but certain contrasts do stand out sharply.
> Perhaps the most striking is the contrast in matters of life and death. Life expectancy at birth in China appears to be firmly in the middle to upper 60s (close to 70 years according to some estimates),1 while that in India seems to be around the middle to upper 50s.2 The under‐5 mortality rate, according to UNICEF statistics, is 47 per thousand in China, and more than three times as much in India, viz. 154.3 The percentage of infants with low birth weight in 1982–3 is reported to be about 6 in China, and five times as much in India.4 Analyses of anthropometric data and morbidity patterns confirm that China has achieved a remarkable transition in health and nutrition.5 No comparable transformation has occurred in India.
> Things have diverged radically in the two countries also in the field of elementary education. The percentage of adult literacy is about 43 in India, and around 69 in China.6 If China and India looked similar in these matters at the middle of this century, they certainly do not do so now.7
> The comparison is not, however, entirely one‐sided. There are skeletons in China's cupboard—millions of them from the disastrous famine of 1958–61. India, in contrast, has not had any large‐scale famine since independence in 1947. We shall take up the question of comparative famine experience later on (in section 11.3), and also a few other problems in China's success story (sections 11.4 and 11.5), but there is little doubt that as far as morbidity, mortality and longevity are concerned, China has a large and decisive lead over India.8
> [...]
> Finally, it is important to note that despite the gigantic size of excess mortality in the Chinese famine, the extra mortality in India from regular deprivation in normal times vastly overshadows the former. Comparing India's death rate of 12 per thousand with China's of 7 per thousand, and applying that difference to the Indian population of 781 million in 1986, we get an estimate of excess normal mortality in India of 3.9 million per year. This implies that every eight years or so more people die in India because of its higher regular death rate than died in China in the gigantic famine of 1958–61.37 India seems to manage to fill its cupboard with more skeletons every eight years than China put there in its years of shame.
It should be noted that Amartya Sen is as center-left as it gets and won a Nobel Prize for his work on famines. Similarly, the examples cited in How Asia Works are just as often staunchly pro-US anti-communist regimes as they are Marxist-Leninist ones. EA political orthodoxy on those subjects is only held in the Global South by African warlords under ICC warrant and Colombian politicians on cartel payroll.
Again, your pitch for command economies basically comes down to this:
"You need to put us in charge of the state in order to provide for the common welfare. And yeah, we're going to put the landlords and businessmen in jail... or reeducation camps... or we might just kill them. And also the kulaks. And whoever we identify as a wrecker. And also the work shy, because it's not really a command economy if people can choose not to participate, innit? And also dissidents, and our political opponents, because we can't have agitators counter-signaling the central committee's plan or demoralizing the citizenry. Everyone we don't like, really.
This is all a necessary precondition for land reform and the establishment of a planned economy. And we swear that, unlike all the other times it's gone poorly, we won't engage in petty score settling or use the unlimited power to enrich ourselves and our friends, or get paranoid and blame any failures on outside agitators and fifth columnists.
But once you let us do that... well, we might still accidentally starve millions of people to death. Gotta break a few eggs sometimes! It only happened once, in China. Twice, if you count the Holodomor. OK, maybe in North Korea and a few other places as well, but we've read a lot of theory and are pretty certain it won't happen this time.
Look, even if we do end up killing a lot of people, just a few decades of absolute power will let us develop native industry and living standards to the point that we can use mercantilism to exploit other economies.
BTW, we're also probably going to do several things which, even taken individually, would be first-ballot entries for "greatest ecological disaster of the century." That's just part of development!
What? Have any countries successfully developed without a planned economy? Sure, but they don't really count, and why would you take that chance when you could personally experience an exciting new iteration of the Great Leap Forward instead?
Listen, if we're wrong, which we won't be, but just hypothetically speaking, after a few decades of absolute power, we'll make a clear-eyed assessment of our track record and voluntarily relinquish power if things aren't going as well as we promised. Pinky swear. We really can get you the China or Korea outcome, rather than ending up a basket case like Mozambique, or Zimbabwe, or Laos, or those other planned economies that also don't really count."
And frankly, given that you put "Jacobin" in your user name, an observer might be forgiven for thinking that you're much more excited about doing the stuff in paragraph 1, and see any economic growth that might occur subsequently as a little side benefit.
Yeah, you're just a deluded ideologue not arguing in good faith or bothering to read my actual argument, fuck off. I did have a chuckle at you being horrified that there are people identifying with Jacobinism... in France. (Even centrists identify with it.)
I'm not actually sure that there is any difference between the definitions that you criticise as primarily meant to persuade and the differences that you mention later. It's true that the definitions try to present EA as obvious, but they still clearly indicate the points of difference from common sense that you bring up later. For example, most people do not actually make significant use of reason and evidence while making donations, and it would be a very unusual person indeed who reads academic papers before giving to charity or even does an intuitive cost-benefit analysis. "Doing the most good" is just presenting consequentialism as obvious, and is in fact pointing to an important difference between Effective Altruism and normal charity. The kinds of thinking mentioned in the definitions are just very different from normal charity. When most people give to charity, what they think about is their emotional attachment and ties to particular causes, individuals, and organisations, and the vibes. To the extent that the definitions fall short of a perfectly scientific description, it's mostly by trying not to highlight differences, not by being inaccurate or even leaving stuff out. And honestly, even the points of controversy aren't very controversial. In my experience, lots of normal people will agree with consequentialism if you argue for it, without much discussion being required. They'll just not think about it much later or apply it in real life, and will be equally easy to persuade to a different moral philosophy the next day, assuming they even remember that you were arguing for something else yesterday.
To be clear, none of this diminishes the value of your project. Just reading a definition, especially one that does not highlight controversial things, will obviously not tell you all about a movement. And of course, some differences are less obvious than the movement's definition; for example, as a sociological fact, it's obvious that people in Effective Altruism take ideas way more seriously than the average person. In fact, the biggest reason why most people are hard to persuade to become Effective Altruists is precisely the fact that people generally do not take ideas very seriously and are not interested in moral philosophy and things like that even as a theoretical exercise, much less as something to apply in real life. And while you don't need to be a consequentialist to be an Effective Altruist, if your moral philosophy doesn't care about consequences to other people, or at least other people far away from you, then it's not very surprising that you're not part of EA. You at least need to care about consequences, among other things, for EA to be something that you find compelling. I am just defending the definitions from your criticism here; I generally think your post makes a lot of good points regarding how Effective Altruism differs from common sense ideas and don't disagree at all with those parts.
I feel like this deserves a bit more focus on the general EA attitude of avoiding, uh, "political struggle", in the vein of "what do you mean, marxist revolution isn't the best EA cause ever".
Having as #1 priority the creation of a machine god to take over the world (well, more specifically, for President Trump to take over the world) is just as much a political struggle as Marxist revolution is. Obviously.
The author seems to recognize this in a later post about moral circle of concern expansion - acknowledging that our circle of concern is most natural and intense for those closest to us. But that's not the EA concept.
It's human and normal to care about your kids MORE than about other people, and to put far more effort and resources into them. But that deserves more scrutiny and theory to be properly understood.
That's probably the most 'autistic' part of it, for better and worse: it takes the theory of utilitarianism and the greatest good to its logical conclusion.
It also shows, to me, the ultimately Christian origins of a lot of this, with all the attempts at world government and the City of God where all nations are united under Christ. I imagine a parallel-universe EA that evolved under Confucian principles would be quite different.
looking forward to the series! I've been thinking about this myself on and off; a few years ago I ran a meetup on "old school EA" and to what extent that's a misnomer. The two mandatory essays I assigned were "Effective Altruism is a Question, not an Ideology" and Holly Elmore's "We are in triage every second of every day". I think if I were to add anything to your list, "triage mindset" would be it!
I also had a lot of fun putting together supplementary readings from the mid-2010s - interested in your take on how much that energy is still around in the community in 2025 vs disappeared vs secret third thing
link to the meetup + readings: https://www.lesswrong.com/events/MuFag2RFjN3E6Je72/old-school-ea-with-ea-waterloo
I (for the most part) love and respect EAs, have been usefully informed by them, and even consider myself "EA-adjacent." Where I differ is only that [and I'm not sure how to say this without sounding a certain gross way, which is not my intent] I consider myself to have my priorities from God, which do include more care for those nearby, and don't include worries like "Will AI cause the singularity?" or "Shouldn't we spend trillions making sure we spread to the stars because eventually life here is doomed?" On a more detailed level, it causes me to focus more on food, physical care, shelter, clothing, etc., and less on, say, vaccines. (I'm very vaccine-positive! It's just not on the list from God so it gets a little bit of a downgrade.)
As far as "those nearby" goes, I am just open to the possibility that God put someone in my path to be helped by me, so for example I donate to food banks near me even though the money would save more lives providing anti-malarial bed nets in Africa.
I feel like this reasoning could really frustrate an atheist, but when I was an atheist it was all I could do to maintain low levels of contributions to EA causes. My total donations to EA causes are actually higher now even though they are a smaller part of my overall donations. Being a Christian pushes me to do more for others even if I do so less efficiently than before (in some sense). There's my meta-consequentialist justification for not being a consequentialist.
Lewis actually talks about the 'Tao' in the Abolition of Man. He points out common threads in moral systems across religions. While I'm sure some anthropologist has done a better job since then, it's an interesting exploration since these are moral 'virtues' that presumably exist across cultures and are therefore likely essential to human flourishing.
As far as Christianity and the "singularity" goes, you might be interested in two works by C.S. Lewis:
- "The Abolition of Man", in which CS Lewis discusses several ideologies that essentially wanted to construct post-human beings, and why this is a morally perilous endeavor. His version of the singularity is more biological in nature, essentially the ability to freely redesign human nature towards some goal. See, particularly, his discussion of the "Conditioners". (Lewis was heavily inspired by early proto-Singulartarian works like Stapledon's "First and Last Men.")
- "That Hideous Strength", a work of fiction which portrays an attempt to create a greater-than-human being, to gain immortality, and to gain the ability to create eternal reward or suffering.
Lewis's critique is based on the idea that humans are not wise enough to try to build a god, and they are not good enough to be trusted to redesign human nature. This is partly a theological criticism, but it follows very similar paths to Yudkowsky's concerns, for example. Lewis saw the fact that humans lacked these powers as a form of divine mercy. If we could redesign minds, we would use that power in horrifying ways.
I've read and enjoyed both! Great recommendations!
It should be noted that in *That Hideous Strength*, the scientists do not actually manage to redesign a mind.
Yeah, the redesigning of minds is discussed in Lewis's non-fiction. The novel is more about attempting to achieve biological immortality (for a test subject) by deeply sketchy means. Lewis made it clear in his other writings that the novel wasn't intended as a serious philosophical work, but rather as an entertaining dramatization of an argument he had made elsewhere.
Nice to see someone write it out so clearly.
I was introduced to EA when I, as a teenager, wrote some post on Reddit about how sad I was that, as an unimpressive layperson, I felt like nothing could be done to improve the world. One of the only helpful comments was "there's this thing called effective altruism."
I got the impression from their main websites that EA was a very polished, uncontroversial, big-tent organisation whose goal was getting as many people as possible to donate a substantial amount to charity, and having those charities be effective. What mostly excited me were some posts about how many problems, problems that nearly everyone agrees are problems, would be solved if everyone, or even just the world's richest 10%, donated 10%. I was so excited for this movement that surely very few could disagree with, one that could align people of so many different persuasions, opinions and perspectives. Imagine so many very ordinary people, changing the world.
Yeah… didn't turn out to be that. It's a movement and social club for a very particular kind of person, one that now focuses much more on "talent funnelling" than on getting new donors. That aspect particularly hurt a little, because I knew at that time that I *wouldn't* be some great talent; I would never invent a new vaccine or an AI safety mechanism or have a job with an enormous salary. And what attracted me to EA was the message that that's okay, you can still be very helpful as a normie westerner. Only to discover many of them… kinda despise normies, or at least don't care about them or see them as useful at all. Also they are concentrated in some of the most expensive places in the world, and all university-educated, very often from, like, Oxford.
And that’s fine, I guess. But I can’t lie and say I wasn’t disappointed.
I really wish there was some sort of movement like what I thought EA was. But I no longer think it’s EA’s “job” or a failure of them as a project to not be what I wanted. It is what it is. And there’s probably also something good in EA as “talent funnelling” and socialising that wouldn’t be there if it was very big tent and neutral.
This is a really common feeling (and something I wrote my Solstice about). I personally hate EA's level of elitism. I understand the arguments for elitism, and am even convinced by them a little bit, but EA would be much better for me if it were open to all EA-minded people regardless of their capabilities. I try to cultivate this attitude in my local community (which is, you know, in SF, so I'm not fully addressing Whenyou's concerns :P).
If it's any consolation, the "great talents" in the Bay Area and Oxford have mostly succeeded in dragging EA into a series of horrible scandals, and have helped to achieve, through incompetence, the exact opposite of what they wanted in AI safety (an arms race that the West is not clearly winning).
In contrast, normies like you have just been trucking along saving lives. It seems pretty clear that you are the more effective altruist here.
I suspect the "great talents" in things like computer science don't translate into great talents in other aspects of life, just as we wouldn't really expect Bill Clinton or Ronald Reagan to be all that good at computer programming. (Though I suspect Clinton probably could have buckled down and done OK if he had to for some bizarre reason.)
For the record: the "rationalist"/"longtermist" factions have always opposed this. The AGI arms race was/is driven by those who came to EA from a traditional liberal philanthropic background.
Here is Scott Alexander criticizing OpenAI for fueling the AI arms race in *December 2015*: https://slatestarcodex.com/2015/12/17/should-ai-be-open/
I don't recall MIRI-aligned individuals publicly expressing criticism at that time, though (because?) their own strategy was the complete opposite, based on utmost secrecy (with, to be clear, other failure modes: https://lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe). But Yudkowsky would later assert that he was horrified and *physically wept* at the news of OpenAI being founded (https://x.com/ESYudkowsky/status/1637538253724352518), and that this was the exact moment he "lost all hope for Silicon Valley grownups" (https://x.com/ESYudkowsky/status/1870886619420692716).
As early as January 2019, more than a month before GPT-2 was even announced, he used Musk's founding of OpenAI as one example of "awful counterproductive policies" in an otherwise unrelated debate (https://x.com/ESYudkowsky/status/1079915734455611397). And in October 2021 (again, more than a year before the release of GPT-3.5 and ChatGPT seriously launched the LLM craze), he launched a public diatribe on Twitter beginning "Nothing else Elon Musk has done can possibly make up for how hard the "OpenAI" launch trashed humanity's chances of survival" (https://x.com/ESYudkowsky/status/1446562238848847877).
Having conversed with people involved in the PauseAI movement (whom I sometimes think are strategically silly, but who are at least value-aligned with the long-term survival and flourishing of the human Gemeinwesen): while there may still be some EA-adjacent people, especially in the rank and file, most of the leadership has already declared EA a lost cause, hopelessly captured by corporate interests and the national security state. Various other longtermist orgs, including Tegmark's FLI and the various national AI Safety Institutes, are rapidly converging on the same stance.
Why don't you just donate 10% and not worry about what the rest of the movement is like? You can still make a difference as a normie by doing that.
I do that (well, did that, my finances are worse now). I just still wish the movement was different.
I remember one post on the EA subreddit by someone who felt bad that they were "only" a park ranger instead of working on a more notable EA cause area. I objected that a park ranger is an altruistic job directly involved in environmental preservation and even if it's not the kind of work that saves a life for $5000, it does more good than most careers. To my surprise, other commenters told me this wasn't a very EA way of thinking!
I can only guess that there are multiple lines of thought about this in EA. The one I'm familiar with encourages people to do their best, but acknowledges individual limits. The whole reason for asking people to donate 10% instead of all their disposable income is that we know demanding that much destroys people. I still think the version of EA I most believe in is one where donating 10% to global prosperity becomes normalized.
Yeah, nothing wrong with being a park ranger. We can't all be finance bros and techies. I think a lot of these guys are way too proud of their top-15 college degrees and grades. Not that there's anything wrong with that, but it doesn't really work as a moral basis.
If you ask me, any definition that does not mention utilitarianism is a bad one. Because that's the main thing making EA unique, and the main thing people object to.
Or at least, it's the common cause behind most of the uniqueness and most of the sticking points.
That's covered by "welfarist, maximizing consequentialism".
"These definitions are unsatisfying, because everyone who is trying to be a good person supports those things. No one is against care and compassion. No one wants to do less good than they could for no reason. No one is like “I specifically hate evidence and reason and I’m going to make my charitable decisions using astrological charts.” Tautologically, people don’t use resources they don’t have. "
I was reading this paragraph and thinking "no, no, no, this is Typical Mind Fallacy, and it doesn't look like they are going to explain why it's wrong." So I went to write my disagreement, then kept reading to see whether I'd need to change my comment. I didn't.
So: if you think that everyone agrees with the core tenets of EA, you suffer from metaphorical color blindness, because different people definitely try to do things other than maximum good.
A lot of people are satisficers, not maximizers. They donate to a charity that is good enough. There are EA critics who criticize the very idea that you should do the most good, claiming it's bad and hubristic and that one shall not maximize.
The other point is that you don't have to be against something to not do it. I'm not against driverless cars, and yet I don't try to make them come sooner with my donations. I'm doing something else instead: making room for the utilitarian voice in my moral coalition. And other people are doing other things.
Some people do what their religion tells them they should do, fulfilling their religious duty. That's the same algorithm that makes them not watch TV on Shabbat.
Other people believe people have a duty to their groups, and should prioritize them. There is a spectrum there, and at its far end is the claim that wasting money on people in Africa when you have poor people in your own city is immoral and wrong. "Maximum good" is just... not the way they think about it.
Other people are trying to be good people, a.k.a. virtue ethics. "Being a good person" is distinctly not about doing maximum good.
(Those were just different ways of saying that most people are not consequentialists.)
Some people have causes that are close to their hearts, so they put their resources there. It may not be the most important cause, but it's THEIRS, and they will work on it, and other people will work on the causes that are dear to them.
There are more kinds of people, whom I don't understand well enough to describe. But doing the EA thing is actually really rare. People who do things that are not EA are not trying to do EA and failing, or disagreeing about what is important. They are doing something else.
People tend to be caught in their own framing and bad at understanding that different people are actually different. So it's important to me to point out that it's just not true that "everyone who is trying to be a good person supports those things." A lot of people think that being a good person is something totally different.
I wonder if you could explain EA anthropologically as "a group of people that vibe with each other." I know that's not useful in any way and also applies to just about any other group, but based on the rallying-flag post from the slatestarchives (https://slatestarcodex.com/2016/04/04/the-ideology-is-not-the-movement/), animal welfare and bed nets and AI risk are the flag, and the secret ingredient is that it's a group of people who believe those things *and* tend to get on well enough with each other that they form a community.
I think this explains the "of course I believe in evidence, but I'm not an EA" people: they might share some or all of the EA beliefs, but they don't feel they fit into the EA/rationalist community. Which is fine; a bed net is a bed net whatever the reason you donate it.
And sadly, a nitpick: "No one is against care and compassion. [...] No one is like I specifically hate evidence and reason". Something something mumble politics grumble.
No kidding. Plenty of religions, both old and new, don't like reason, and plenty of people don't want care and compassion for people they dislike.
A lot of people will explicitly reject #2 if you bring it up. My late wife was one of them; she basically said that America and Americans shouldn't try to help poor people in other countries as long as there were still poor people in America who needed help, even if helping Africans took fewer resources per person helped.
There's also a certain odious individual that made "America First" one of his catchphrases. :/
One other thing EA tends to be is secular; attempting to save souls from eternal damnation by persuading people to convert to the One True Religion is not generally considered an EA activity.
Really looking forward to this.
And even reading this initial list made me realise one thing that I've never seen anyone bring up: that I am probably an EA in a sort of... functional/operational/large-scale-systems way (tho with very strong speciesist reservations regarding the extent of the moral-circle extension and, I'm not sure if this is included, a huge question mark over far-out longtermism). So I think it's close to the optimal way to "do" things like operate charity organisations, allocate international aid funds, and choose where one puts the "rationally driven" part of one's individual charitable contribution.
But -- and I'm saying this after many months, if not a few years, considering these things and a recent near-epiphany -- I'm much much further from an EA in the "general personal morality" sense.
A good analogy here is epidemiology/public health vs my personal ("clinical") health-related decision-making. I absolutely feel that rational, probabilistic calculus should guide the former. I'm ABSOLUTELY not going to use it when making my own.
That's awesome! One of my hopes in this series is to open space for people who use an "EA mindset" in certain areas or as one of many approaches to problems.
I'm sympathetic to the "Government House utilitarianism" approach: that utilitarianism works much better as a political philosophy, or a guiding approach for similar large-scale impersonal institutions, than as a moral philosophy for individuals. Among other things, it avoids the Demandingness Objection to utilitarianism, which I find persuasive.
I'm curious what you use when making personal health decisions instead of rational probabilistic calculus! Feel free to ignore this or tell me to mind my own business, but if you're okay with sharing I'm very curious.
My first impulse was to just yell "VIBES", but it's not quite that. I suppose I use data, but I often include/overlay "weird"/"irrational" personal criteria that are mostly to do with indulging my monkey brain (= vibes), and sometimes with prioritizing one kind of risk over another in ways that don't align with what's typically considered rational. But effectively it means that I often end up off the path recommended by the treatment (or assessment) algorithm/protocol, and likely don't optimise my probabilistic outcomes.
Examples:
I was invited to get the AstraZeneca covid vaccine in the spring of whichever year the vaccines were deployed (I was just over 50 then). At that time TTS (thrombosis with thrombocytopenia syndrome) had been identified as a very rare side effect. I absolutely, fully, rationally accepted that these events were very, very rare (e.g. rarer than the probability of developing polio after the live polio vaccine, which the older of my children got and which I had no doubts about using, mostly for prosocial reasons). But I also knew that, in the mental state I was in, I'd have a freakout with every headache for 10 weeks after that jab. So I looked at my local situation (remote area, locked down, WFH, literally no covid within 40 miles) and my risks, and decided to wait until the summer, when the Pfizer jabs were made available to the remainder of the general population (= healthy under-50s). So this decision was "objectively" irrational based on epidemiology but made sense for me.
Example 2: I don't do mammography breast-cancer screening. This one is, imo, actually a fully rational decision based on data/meta-analyses/numbers needed to treat (and test), combined with zero family history of breast cancer and minimal history of other cancers (if I get one, it's likely to be one of two others, as I smoked for 20 years and have fairly severe GERD). BUT it runs against a nationwide public-health programme and recommendations based on population data, so I guess this one is more a difference between a clinical and an epidemiological lens.
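(For context on the numbers-needed-to-screen arithmetic, with figures that are illustrative rather than exact: one widely cited Cochrane estimate is that roughly 2,000 women must be invited to mammography screening for about 10 years to avert one breast-cancer death, while around 10 healthy women are overdiagnosed and treated unnecessarily. At base rates like that, someone with below-average risk can rationally decline even when the population-level programme is sound.)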
Pure vibes: I will NOT even try an SSRI or SNRI for my Broken Brain. Just because. Nope. Just no.
That all sounds very reasonable! Thanks for sharing!
> The effective altruist approach to politics: small-l liberal, capitalist, and technocratic.
This is certainly a *common* stance, but I wouldn't call it distinctive. It's neither unique (see for instance the majority of the democratic party) nor universal (hi).
The majority of the Democratic Party is liberal, capitalist, and technocratic?!
That... looks like the exact opposite of the truth. The Democratic Party looks progressive to me (and so opposed to liberal values like freedom of speech and not firing people from their jobs because of their political opinions), anti-capitalist, and anti-technology.
- Employee speech protections have never been particularly strong in the United States. The Red Scare is the obvious example, but history is littered with smaller purges. Of pacifists, of Catholics, of abolitionists... and yet it would be absurd to say that the United States only became a liberal state in the 70s. That's just not how the word is used.
- A single-digit number of self-identified socialists in congress does not an "anti-capitalistic" party make.
- Technocratic does not mean "pro-technology", it means favoring governance by technical professionals. The European Union is the actually existing technocracy par excellence. There's no direct antonym but you could do worse than "electoral".
I'm trying to understand whether we're describing the same politics with different words, or have different models of the Democratic Party.
So here is a question: if you go to some people who vote Democratic and ask them whether they are for or against capitalism, what do you think they will answer?
Your answers about "liberal" and "technocratic" make it sound like we agree about what exists and use different words to describe it, but it doesn't work the same way for capitalism, and I think there is some deeper confusion about what you mean here.
> So here is a question: if you go to some people who vote Democratic and ask them whether they are for or against capitalism, what do you think they will answer?
The most common response will be to look at you funny, because most people don't think about politics in such an abstract idealized way. But once you get past that, you'll get lukewarm support for "capitalism" qua tribal signal and overwhelming majorities in favor of private ownership of the means of production.
Except for the explicitly democratic-socialist minority around Sanders and the Squad (with negligible influence on party policy, as the last 16 months have shown), every Democrat from Warren onwards, Warren included, self-identifies as strongly pro-capitalist. If you have a different model of the Democratic Party, you are just... completely disconnected from political reality and too steeped in paranoid online "anti-woke" discourse (which are, I acknowledge, also two common aspects of the "effective altruist approach to politics" that Ozy forgot to mention).
I actually AM completely disconnected from USA politics. (I also try to stay mostly disconnected from my own country's politics, but that is harder.) I'm totally fine finding out that all the loud anti-capitalist declarations from Democrats that manage to reach me, despite my vigilant efforts to avoid them, are... not actually as important as they seem, and totally fine with being told I am wrong (though I consider the accusation of paranoia to be lacking in good faith).
So, in your model, the mainstream Democrats are capitalists de facto, but... they never say so?
Because, see, in my model of the world, if a party supports something, it will say so: important people in the party will say "capitalism is good, actually!" This will especially happen if someone else in the party is saying "capitalism is bad!" And if all the signaling goes in one direction, the side that is unsayable will lose.
So it looks to me (and we've already established that I don't actually understand USA politics, so you are more than invited to correct me if I'm wrong) that there are two main options:
(1) There are people who are pro-capitalism and Democrats, and they say so, and I didn't hear it because it's not newsworthy or scissory enough. In that case there are declarations from Obama and Biden and Harris that capitalism is good, that they support it, and that the USA is a capitalist country. Or,
(2) Democrats are anti-capitalist, there are no such declarations, and there are anti-capitalist declarations from other party members while the candidates try to remain ambiguous about it.
The third option is that they are pro-capitalism but don't say so, and don't disagree with the more radical elements of their party. I claim that this situation is unstable and will move toward the second one.
But I'm still not trying to settle the disagreement; I'm still on the understanding-your-model step.
So, is your model 1, 2, 3, or something different?
Frankly, "everyone in the US is very pro-capitalism" is a very basic fact about how the world works that I expect anyone up to and including even (especially?) people in Amazonian tribes to be aware of. But considering you asked so nicely.
Kamala Harris:
"I've always been and will always be a strong supporter of workers and unions," Harris said to applause. "I also believe we need to engage those who create most of the jobs in America. Look, I am a capitalist. I believe in free and fair markets. I believe in consistent and transparent rules of the road to create a stable business environment." https://eu.usatoday.com/story/news/politics/elections/2024/09/25/kamala-harris-economic-policy-capitalism-business/75356031007/
“I am a capitalist. I am a pragmatic capitalist,” Harris said in the interview at the vice president’s official residence in the Naval Observatory in Washington. “I believe that we need a new generation of leadership in America that actively works with the private sector to build up the new industries of America, to build up small-business owners, to allow us to increase home ownership.” https://www.nbcnews.com/politics/2024-election/harris-says-pragmatic-capitalist-pitch-latino-voters-rcna176702
Joe Biden:
Last week, as President Joe Biden signed “An Executive Order Promoting Competition in the American Economy,” he echoed the language of his predecessors. “[C]ompetition keeps the economy moving and keeps it growing,” he said. “Fair competition is why capitalism has been the world’s greatest force for prosperity and growth…. But what we’ve seen over the past few decades is less competition and more concentration that holds our economy back.” https://publicseminar.org/essays/a-proud-capitalist-joe-biden-is-championing-competition/
"Look, I’m a capitalist. I have no problem with companies making reasonable profits. But not absurd levels on the backs of working families and seniors – it's about basic fairness." https://x.com/POTUS/status/1636400685649371136
Elizabeth Warren:
“I am a capitalist to my bones,” Sen. Warren tells New England Council, one of several instances this morning where she’s highlighted her belief in capitalism and markets while talking bankruptcy policy https://x.com/katielannan/status/1018852303212896257
"I am a capitalist. Come on. I believe in markets. What I don’t believe in is theft, what I don’t believe in is cheating. That’s where the difference is. I love what markets can do, I love what functioning economies can do. They are what make us rich, they are what create opportunity. But only fair markets, markets with rules. Markets without rules is about the rich take it all, it’s about the powerful get all of it. And that’s what’s gone wrong in America." https://www.cnbc.com/2018/07/23/elizabeth-warren-i-am-a-capitalist-but-markets-need-rules.html
Michael Bloomberg:
“I’m as much a capitalist as you will ever find,” he said. “But anyone who believes that unfettered capitalism works hasn’t read history.” https://content.news.harvard.edu/gazette/story/2019/05/michael-bloomberg-extolls-moral-leadership-at-harvards-class-day/
Pete Buttigieg:
“Let’s be very clear,” he said. “This is a capitalist country. The government does not make baby formula, nor should it. Companies make formula.” https://jacobin.com/2022/05/pete-buttigieg-free-market-hungry-baby-formula-capitalism
"I think of myself as progressive. But I also believe in capitalism, but it has to be democratic capitalism." https://www.vox.com/policy-and-politics/2019/3/28/18283925/pete-buttigieg-mayor-pete-interview-capitalism
"Well, American capitalism is one of the most productive forces ever known to man, and there’s so much that this country has been able to unlock, especially in the last century, in terms of technology, in terms of prosperity. Now where it goes wrong is when it’s only being experienced in certain parts of the country or by certain kinds of people, and I think it goes to show just how important it is for capitalism to work that it be backed by all of the other pieces that business alone can’t solve. But when it’s working right, there’s nothing like it. It’s extraordinary." https://www.cnbc.com/2019/04/12/2020-candidate-pete-buttigieg-on-taxing-the-rich-future-of-us-capitalism.html
Hillary Clinton:
Clinton jumped in, saying: “When I think about capitalism I think about all the business that were started because we have the opportunity and the freedom to do that and to make a good living for themselves and their families… We would be making a grave mistake to turn our backs on what built the greatest middle class in the history.” https://time.com/4072583/democratic-debate-hillary-clinton-bernie-sanders-capitalism/
None of these things get you anywhere near the AI apocalypse stuff. And none of it commits you to crazy (sorry, Parfit) ideas about obligations to potential people who may or may not exist.
Isn't it relatively uncontroversial that, even though they don't yet exist, hurting "our grandchildren" through climate change is at least somewhat bad, if indeed we are doing that? Stupid conservatives debate whether climate change is happening, smart conservatives debate whether lowering emissions is worth it given the harms to current people, and economists downweight benefits of climate mitigation that are far enough in the future, but few people outright deny that if climate change harms future people, that is bad, no? You might think that's wrong, but then I think you're the one making a claim that runs counter to common sense for theoretical reasons.
What *is* controversial about Parfit/longtermist EA is the much more specific claim that it is bad if people who would have had good lives don't come into existence at all. But that is a much more specific claim than the claim that it is bad if current actions harm children in 2080. Although the only paper I've seen that tests what ordinary people think about this found that they were surprisingly sympathetic to the idea that we have some reason to create happy people just because they will be happy: https://globalprioritiesinstitute.org/population-ethical-intuitions-caviola-althaus-mogensen-and-goodwin/ (Mind you, it also shows ordinary people rejecting a standard objection to average utilitarianism that I've always thought was completely decisive.)
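(For readers who haven't met it, one standard objection of this kind, not necessarily the exact one meant above, runs on simple arithmetic. Average utilitarianism says a world is better the higher its average welfare. Take a world of 1,000 people each at welfare −100: the average is −100. Create 1,000 more people at welfare −90, and the average becomes (1,000 × (−100) + 1,000 × (−90)) / 2,000 = −95, an improvement. So the view recommends bringing lives of severe suffering into existence merely because they are slightly less miserable than the lives that already exist.)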
#9 seems at odds with the basic tenets of EA to me. Capitalism includes a hierarchy; it includes some doing better at the expense of others. That doesn't seem like it maximizes good for the most people.
I should probably write an essay about this at some point, but the region in concept-space you are pointing at with "capitalism" is different from the region in concept-space that Ozy is pointing at with "capitalism", and this is a common barrier that prevents antiauthoritarian leftists and progressive liberals from communicating effectively.
https://en.wikipedia.org/wiki/Capitalism#Etymology
> The initial use of the term "capitalism" in its modern sense is attributed to Louis Blanc in 1850 ("What I call 'capitalism' that is to say the appropriation of capital by some to the exclusion of others") and Pierre-Joseph Proudhon in 1861 ("Economic and social regime in which capital, the source of income, does not generally belong to those who make it work through their labor").[18]: 237 Karl Marx frequently referred to the "capital" and to the "capitalist mode of production" in Das Kapital (1867).[24][25] Marx did not use the form capitalism but instead used capital, capitalist and capitalist mode of production, which appear frequently.[25][26] Due to the word being coined by socialist critics of capitalism, economist and historian Robert Hessen stated that the term "capitalism" itself is a term of disparagement and a misnomer for economic individualism.[27] Bernard Harcourt agrees with the statement that the term is a misnomer, adding that it misleadingly suggests that there is such a thing as "capital" that inherently functions in certain ways and is governed by stable economic laws of its own.[28]
> In the English language, the term "capitalism" first appears, according to the Oxford English Dictionary (OED), in 1854, in the novel The Newcomes by novelist William Makepeace Thackeray, where the word meant "having ownership of capital".[29] Also according to the OED, Carl Adolph Douai, a German American socialist and abolitionist, used the term "private capitalism" in 1863.
"Thou shalt not strike terms from others' expressive vocabulary without suitable replacement." https://www.lesswrong.com/posts/H7Rs8HqrwBDque8Ru/expressive-vocabulary
The problem is that when one person thinks "chemicals" means "lead and arsenic contamination" and another person thinks "chemicals" means "unsafe food dyes", they will have communication difficulties even if each is individually using a valid definition. Etymology is intellectually interesting, but using the original definition of a word does not solve the communication problem if one's interlocutor doesn't use the same definition.
A good Schelling point is that those who want to talk about "free markets" can say "free markets", and those who want the original meaning of "capitalism" can say "capitalism".
please write the essay
Here's one that already exists that comes close:
https://www.lesswrong.com/posts/3bfWCPfu9AFspnhvf/traditional-capitalist-values
When most EAs think of "capitalism" they generally think of it as "the system that brought the world such things as grocery stores, cars, washing machines, and antibiotics, by setting things up so that ambitious people could more easily accumulate wealth by producing goods and services instead of extracting wealth from others by force."
The appropriate contrast to capitalism isn't Marxism, but rather feudalism: the local rich person is rich because he controls a lot of land and can tax the people who farm it, and if he wants to get richer, the only practical way for him to do it is by getting his hands on land that currently belongs to someone else.
That sounds very backward-looking. Marx saw capitalism as the natural replacement for feudalism, and communism as the natural replacement for capitalism. I'm not completely on board with his ideas about what comes next, but saying "we've already figured out the best possible economic system" doesn't seem like the sort of optimistic, forward-thinking approach I tend to expect from EAs.
Optimism and forward thinking won't get you anywhere without practicality. Considering the track record of attempts at replacing capitalism, is it likely that EAs will be able to come up with a better system, then either take over a country or start their own, and then implement their system without causing unrest, economic collapse, or human rights violations? Probably not. Better to stick with things that have a high chance of success and a low chance of backfiring horribly, like buying bed nets, even if they only work on a small scale.
Also, see this Slate Star Codex post: https://slatestarcodex.com/2015/09/22/beware-systemic-change/
I think this was more credible a point when the explicit top priority of EAs wasn't creating a machine god to take over the world.
At least we don't have history books full of stories where someone created a machine god, and it killed millions of people and caused decades of poverty and oppression.
The EAs themselves write books full of stories about why creating a machine god would lead to killing billions of people and causing aeons of astronomical waste.
See also: "How to Make Wealth" by Paul Graham
https://paulgraham.com/wealth.html
"at the expense of others"
That is not how I describe capitalism, and not a way that I expect self-identifying capitalists to describe it.
Or, to be more blunt: it sounds to me like a straw man that got repeated so many times that the group repeating it forgot that it's a lie, and distorted its own map.
Is it a straw man if it's an accurate description of the system as it exists in the real world, though? Those who support capitalism tend to idealize it, in much the way communists idealize communism. The actual actions of capitalist regimes demonstrate that the "ideal" capitalism they describe is not what actually exists. Pointing that out tends to get capitalism's supporters talking about cronyism, but they can't point to a capitalist regime that doesn't include cronyism.
So can you point to any regime that has ever existed in history that didn't include cronyism? The argument isn't that Capitalism is perfect, but that it's the best system available.
So, the way I understand it, what you write doesn't make sense in context. The context, as I see it, is that Ozy wrote that EAs believe in capitalism, and you wrote: "#9 seems at odds with the basic tenets of EA to me. Capitalism includes a hierarchy; it includes some doing better at the expense of others. That doesn't seem like it maximizes good for the most people."
And it looks to me like a failure of theory of mind, because that's not what capitalists believe in. I have a mental model of communists in which they want to build utopia. I would not say of a hypothetical EA communist that communism is at odds with EA because it's murderous, because I know that communists don't see themselves as murderous, and want good things. I might say it goes against empiricism, but that's a different claim.
And your comment does not demonstrate this understanding of how capitalists see themselves.
You can understand something without agreeing with it. I am not a communist. But I'm not here to have the actual argument about capitalism; I'm here to point out this weird failure of theory of mind.
Like, you should know what the people you disagree with believe in. Did you actually fail to learn what the people you disagree with believe? It's important to have a good model of the world, including of what other people believe. Do you know what we believe, but pretend not to, because arguments are soldiers and it makes your argument sound better (it doesn't, and it's also a shameful thing to do)? Or, and this is my leading hypothesis, did you do some slippery, muddled thinking that somehow let your counterarguments slip into your model of how capitalists think about themselves?
Or something fourth I didn't think of?
In my understanding, a big part of EA is discarding how people imagine a system works in favor of how the system actually manifests. I would absolutely call out an EA communist for dismissing the USSR etc. as "not real communism", and EA capitalists need the same treatment.
Can you write, like, five paragraphs, or a post, on that? Because I have my own model of "what is EA", and this model does not include "discarding how people imagine a system works in favor of the actual physical manifestation of the system" at all.
I can... try a wild guess and say it comes from empiricism, but EA empiricism is, like: check whether it's better to sell malaria nets or give them away for free, see that free is better, and then give them away for free. It works on a different, and lower, level of world-modeling than the level of capitalism and communism.
Moreover, in my model, it's important to separate EA from communism-vs-capitalism-level discussions. When EA tried to do prison reform, it went wrong and DILUTED the special essence of EA, because part of EA, its Randomista spirit, its pulling-sideways, is to avoid that.
I will personally judge a communist EA, but I will not judge, in an EA-related way, a communist EA who is doing the normal EA things: donating to vaccines and malaria nets and to reducing the torture of animals.
And it looks like you not only have a model of EA that emphasizes different things; your model actively contradicts mine. So... can you explain your model more? (Or ask me questions about mine. I don't know what questions to ask about yours, because my reaction is mostly ????)
Well, to use malaria nets as an example: if malaria nets were routinely treated with a substance that was supposed to repel mosquitoes but did not, and instead either attracted them or gave the people using the nets some disease, then people would believe that malaria nets were making people's lives better, but it wouldn't be true. An effective altruist would look at the data, see that people who had been given nets were actually at higher risk of dying, and wouldn't support distributing malaria nets.
Economic systems are certainly more complex than that, and I can agree with your approach, that it's better to focus on smaller, more practical actions that make a meaningful difference. But that also contradicts the idea that EAs are capitalist (or technocratic, or liberal). For EAs to favor capitalism, or any other large-scale, systemic approach, they need to put aside idealistic ideas about those systems and look at them in practical ways, just as they would with malaria nets. That's difficult when it comes to capitalism, because it is so omnipresent, but we can see that people had more leisure time under feudalism, and less homelessness, better access to medical care, etc., under socialist systems. Defaulting to capitalism because it's everywhere is as much of a cop-out as deciding nobody really needs a malaria net because the ones available are treated with toxic chemicals.
I don't see a contradiction. I mean, ideally everyone would be happy, but since that's currently impossible, it's better for some people to be happy and other people to be miserable than for everyone to be miserable. And every system of economics or government involves a certain amount of inequality.
But does Capitalism actually lead to the most happiness for the most people? Based on statistics from more and less Capitalist countries, it doesn't seem to.
What less capitalist countries? If you're thinking of some European country like Finland, I will point out that that country has a mixed economy, like the United States and most other countries today. Finland is a little less capitalistic than the US, but much more capitalistic than China, Vietnam, or Cuba, all countries I'm glad I don't live in. As far as I can tell, there's only one country in the world with a completely centrally planned economy (not counting black market activity), and that's North Korea. There used to be more, but they either went at least partly capitalist or collapsed.
I think it is an error to think of Capitalism as an unplanned economy. It's an economy controlled by Capital, planned and organized to best benefit those with the most of it.
You could acknowledge that a planned economy causes mass death and suffering, whether that planning is done by government bureaucrats or corporate ones...
My impression is that next to no Western liberals actually bother to answer that question, and that when they do answer it based on actual social-science research rather than their social milieu's political prejudices, they have to admit, despite extreme ideological reluctance, that, yes, economic planning is in fact incommensurably superior to laissez-faire capitalism, e.g. https://www.astralcodexten.com/p/book-review-how-asia-works
I read the review you linked, and... oh dear. Scott notes in the opening paragraph that "In the 1960s, sixty million people died of famine in the Chinese countryside; by the 2010s, that same countryside was criss-crossed with the world's most advanced high-speed rail network, and dotted with high-tech factories," then elides, out of either ignorance or malfeasance, that the Great Chinese Famine *was caused in large part by central planning.* Given that he later admits he "[doesn't] know much economic history", I'm hoping it's the former.
https://academic.oup.com/book/2070/chapter/141991095
> When development planning began in China after the revolution (1949) and in India after its independence (1947), both countries were starting from a very low base of economic and social achievement. The gross national product per head in each country was among the lowest in the world, hunger was widespread, the level of illiteracy remarkably high, and life expectancy at birth not far from 40 years. There were many differences between them, but the similarities were quite striking. Since then things have happened in both countries, but the two have moved along quite different routes. A comparison between the achievements of China and India is not easy, but certain contrasts do stand out sharply.
> Perhaps the most striking is the contrast in matters of life and death. Life expectancy at birth in China appears to be firmly in the middle to upper 60s (close to 70 years according to some estimates), while that in India seems to be around the middle to upper 50s. The under-5 mortality rate, according to UNICEF statistics, is 47 per thousand in China, and more than three times as much in India, viz. 154. The percentage of infants with low birth weight in 1982–3 is reported to be about 6 in China, and five times as much in India. Analyses of anthropometric data and morbidity patterns confirm that China has achieved a remarkable transition in health and nutrition. No comparable transformation has occurred in India.
> Things have diverged radically in the two countries also in the field of elementary education. The percentage of adult literacy is about 43 in India, and around 69 in China. If China and India looked similar in these matters at the middle of this century, they certainly do not do so now.
> The comparison is not, however, entirely one-sided. There are skeletons in China's cupboard—millions of them from the disastrous famine of 1958–61. India, in contrast, has not had any large-scale famine since independence in 1947. We shall take up the question of comparative famine experience later on (in section 11.3), and also a few other problems in China's success story (sections 11.4 and 11.5), but there is little doubt that as far as morbidity, mortality and longevity are concerned, China has a large and decisive lead over India.
> [...]
> Finally, it is important to note that despite the gigantic size of excess mortality in the Chinese famine, the extra mortality in India from regular deprivation in normal times vastly overshadows the former. Comparing India's death rate of 12 per thousand with China's of 7 per thousand, and applying that difference to the Indian population of 781 million in 1986, we get an estimate of excess normal mortality in India of 3.9 million per year. This implies that every eight years or so more people die in India because of its higher regular death rate than died in China in the gigantic famine of 1958–61. India seems to manage to fill its cupboard with more skeletons every eight years than China put there in its years of shame.
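(To spell out the arithmetic in that last quoted passage: (12 − 7) / 1,000 × 781,000,000 ≈ 3.9 million excess deaths per year; and working backwards, "every eight years or so" implies a famine death toll of roughly 3.9 million × 8 ≈ 31 million, in line with commonly cited estimates for 1958–61.)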
It should be noted that Amartya Sen is as center-left as it gets and won a Nobel Prize for his work on famines. Similarly, the examples cited in How Asia Works are just as often staunchly pro-US anti-communist regimes as they are Marxist-Leninist ones. EA political orthodoxy on these subjects is held in the Global South only by African warlords under ICC warrant and Colombian politicians on cartel payroll.
Again, your pitch for command economies basically comes down to this:
"You need to put us in charge of the state in order to provide for the common welfare. And yeah, we're going to put the landlords and businessmen in jail... or reeducation camps... or we might just kill them. And also the kulaks. And whoever we identify as a wrecker. And also the work shy, because it's not really a command economy if people can choose not to participate, innit? And also dissidents, and our political opponents, because we can't have agitators counter-signaling the central committee's plan or demoralizing the citizenry. Everyone we don't like, really.
This is all a necessary precondition for land reform and the establishment of a planned economy. And we swear that, unlike all the other times it's gone poorly, we won't engage in petty score settling or use the unlimited power to enrich ourselves and our friends, or get paranoid and blame any failures on outside agitators and fifth columnists.
But once you let us do that... well, we might still accidentally starve millions of people to death. Gotta break a few eggs sometimes! It only happened once, in China. Twice, if you count the Holodomor. OK, maybe in North Korea and a few other places as well, but we've read a lot of theory and are pretty certain it won't happen this time.
Look, even if we do end up killing a lot of people, just a few decades of absolute power will let us develop native industry and living standards to the point that we can use mercantilism to exploit other economies.
BTW, we're also probably going to do several things which, even taken individually, would be first-ballot entries for "greatest ecological disaster of the century." That's just part of development!
What? Have any countries successfully developed without a planned economy? Sure, but they don't really count, and why would you take that chance when you could personally experience an exciting new iteration of the Great Leap Forward instead?
Listen, if we're wrong, which we won't be, but just hypothetically speaking: after a few decades of absolute power, we'll make a clear-eyed assessment of our track record and voluntarily relinquish power if things aren't going as well as we promised. Pinky swear. We really can get you the China or Korea outcome, rather than ending up a basket case like Mozambique, or Zimbabwe, or Laos, or those other planned economies that also don't really count."
And frankly, given that you put "Jacobin" in your user name, an observer might be forgiven for thinking that you're much more excited about doing the stuff in paragraph 1, and see any economic growth that might occur subsequently as a little side benefit.
Yeah, you're just a deluded ideologue not arguing in good faith or bothering to read my actual argument, fuck off. I did have a chuckle at you being horrified that there are people identifying with Jacobinism... in France. (Even centrists identify with it.)
I'm not actually sure that there is any difference between the definitions that you criticise as primarily meant to persuade and the differences that you mention later. It's true that the definitions try to present EA as obvious, but they still clearly indicate the points of difference from common sense that you bring up later. For example, most people do not actually make significant use of reason and evidence when making donations, and it would be a very unusual person indeed who reads academic papers before giving to charity, or even does an intuitive cost-benefit analysis. "Doing the most good" is just presenting consequentialism as obvious, and is, in fact, pointing to an important difference between Effective Altruism and normal charity. The kinds of thinking mentioned in the definitions are just very different from normal charity. When most people give to charity, what they think about is their emotional attachment and ties to particular causes, individuals, or organisations, and the vibes. To the extent that the definitions fall short of a perfectly scientific description, it's mostly by trying not to highlight differences, not by being inaccurate or even leaving stuff out. And honestly, even the points of controversy aren't very controversial. In my experience, lots of normal people will agree with consequentialism if you argue for it, without much discussion being required. They'll just not think about it much later or apply it in real life, and will be equally easy to persuade to a different moral philosophy the next day, assuming they even remember that you were arguing for something else yesterday.
To be clear, none of this diminishes the value of your project. Just reading a definition, especially one that does not highlight controversial things, will obviously not tell you everything about a movement. And of course, some differences are less obvious than the movement's definition; for example, as a sociological fact, it's obvious that people in Effective Altruism take ideas far more seriously than the average person. In fact, the biggest reason most people are hard to persuade to become Effective Altruists is precisely that people generally do not take ideas very seriously and are not interested in moral philosophy and the like even as a theoretical exercise, much less as something to apply in real life. And while you don't need to be a consequentialist to be an Effective Altruist, if your moral philosophy doesn't care about consequences to other people, or at least to other people far away from you, then it's not very surprising that you're not part of EA. You at least need to care about consequences, among other things, for EA to be something you find compelling. I am just defending the definitions from your criticism here; I generally think your post makes a lot of good points about how Effective Altruism differs from common-sense ideas, and I don't disagree at all with those parts.
I feel like this deserves a bit more focus on EA's general attitude of avoiding, uh, "political struggle", in the vein of "what do you mean, Marxist revolution isn't the best EA cause ever?"
Having as #1 priority the creation of a machine god to take over the world (well, more specifically, for President Trump to take over the world) is just as much a political struggle as Marxist revolution is. Obviously.