I count mussels and oysters as meat-plants, but not clams. Clams are capable of motility! My reasoning for treating this as the relevant boundary is the educated guess that 'pain is evolutionarily expensive'. If an animal can't move, it probably doesn't have a sophisticated pain response. If an animal *can* move, then I'm inclined to give it the benefit of the doubt. Still, their neurons number in the thousands, so this isn't exactly an issue of serious concern to me; there's only so much pain that can be going on in a system that simple.
Sentience is more about movement than sensation. Plants have a similar avoid/approach "experience" to the lowest beings with a movable body, but no ability to act to change their situation.
Bacteria are capable of motility; would you assign them more weight than a mussel or a tree?
Less than a mussel (surely, if nothing else, bacteria would also live within and on mussels); unsure about trees and not very interested in the question. I think on the 'pain is evolutionarily costly' theory, pain would probably be very costly for something as small as a bacterium, which could probably accomplish something similar without it. Motility is necessary but not sufficient for establishing that a creature can feel pain, in my model.
I think the caring about the individual vs. groups difference is a pretty fundamental one.
Many political causes are not saying "I care more about people in this category", but rather "I care about this category as its own object". This is why you get people so concerned about "the great replacement" and genocide, while much less concerned about individual murder of larger numbers of people in larger demographic groups.
And the same applies to non-humans. A typical environmentalist finds the death and suffering of millions of pigeons basically irrelevant, but the extinction of pigeons would be a big deal. Thus we get these laws about how companies must protect the existence of *species*, but are free to harm as many *animals* as they want.
A big part of effective altruists' "weirdness" comes from not caring about preserving groups at all, focusing only on the largest numbers of sufferers. These naturally tend to come from the most common species, which normal people care the *least* about.
Huh, do EA people not care about groups? So no endangered species causes?
There's also the whole aspect that group-thinking is pretty normal (which doesn't mean it's right necessarily). We evolved to get into groups and fight wars with other groups (just look around the world), so the effective altruist perspective (which tends to be the extreme form of the universalist 'there is no outgroup') is really weird to most people.
Progressing beyond naturalistic ways is the whole process of civilization. Special interest groups are the main thing keeping society uncivilised.
I'm not sure there's evidence for a difference between EA and non-EA people when it comes to marginalised groups here. (I get that you are unsure about this point, but I want to push back anyway). Note that the survey was asking participants to rank their "obligation" to show moral concern for each group, from 1 (absolutely no obligation) to 9 (very strong obligation).
When people say they feel a stronger obligation to care for a mentally challenged person than for a random citizen of their country, I assume their reasoning is something like "mentally challenged people have difficulty taking care of themselves, therefore there is a higher obligation for other people to step in and compensate for this". I suppose some people might see mentally challenged people as more "morally pure" or whatever, but for most it seems to be more about treating people as equally worthy, and trying to lift them up to the same level.
Similarly, when people (very slightly) prefer homosexuals to random citizens, I assume this also comes from a place of "these people are discriminated against, so we have a slightly higher obligation towards them in order to compensate for this".
EAs absolutely do feel more obligated towards people in the third world compared to the first: it's reflected in their donations.
Yes, the reason for the strong mutual hostility between EAs and "the Woke" (hereafter SJs) isn't differences in evaluating circles of concern (both are ostensibly universalist) but how this concern is expressed. EAs tend to view maximizing welfare as the sole moral concern, which SJs view as paternalistic or even neo-colonialist. SJs tend to view emancipation of marginalized people from structural oppression as the main concern, which EAs view as an inefficient use of resources based on qualitative sociology rather than objective neoclassical economics.
I think we're kind of rationalizing after the fact. (This is, after all, an offshoot of rationalism. ;) )
A lot of SJ rhetoric and ideology relies on *seeing your own group as the outgroup* (at least if white/male/etc.) in order to correct for historic injustices. There's also a lot of borrowing of pre-existing Christian framing about sin and punishment and repentance. EAs don't do that, from what I can tell.
This has the bizarre presupposition that social justice activists aren't generally themselves women, black, queer, etc.
Hmm? No, everyone has some privileges, and it's about seeing your privileges, whatever they are, as the outgroup.
Queer women in social justice, for instance, are almost all white, cis, well-educated citizens, so they focus on seeing those groups as the outgroup.
This frankly just feels like talking to someone from an alternate reality, based on half-remembered encounters from, like, Occupy or something.
Which part isn't accurate, in your view?
I think I qualify as a "normie" EA. I identify with the movement, donate to EA charities, read some blogs, but I'm not otherwise involved in the subculture at all.
Your first few EA posts seemed dead-on to me, but this one doesn't feel like it's describing a belief system that I subscribe to, or that I've gotten the impression is the strong consensus belief of the community. I think moral circle expansionism is certainly a core EA belief, and your final point, that if a stranger is dying in horrible pain we should do something to help them, also rings very true.
You mentioned at the beginning that the goal was an anthropological description and not a manifesto. Would three specific moral circles, a complete rejection of moral desert, and a complete rejection of consideration by species be beliefs that the vast majority of EAs would endorse? I think the movement certainly holds to those concepts much less strongly than the general public does, but I'm not sure most EAs reject them completely or even hold a consistent philosophical opinion there, unlike, say, consequentialism, which seems integral to the movement.
Or it could just be that I've gotten the wrong impression, given it's not something front and center in the marketing.
I've been involved in the subculture more deeply than you, and you're right that it would be an overstatement to call the absolutist version of this position a strong consensus. I think many EAs care more about stranger humans than they do about animals, even accounting for the size of the welfare buckets, in part due to moral uncertainty and to self-awareness about the especially strong weirdness of caring about wild insect suffering, etc. They may have a 4th circle for animals that is *still way closer and less discriminatory among animals* than that of normal people, but distinct nonetheless. Likewise, I think the moral desert thing is directional but far from absolute. I believe in some of it myself, though less of it than many normal people seem to.
It might be tricky to interpret the results of this study, because people could be conflating the welfare of the thing with the thing's impact on aggregate welfare.
For example, it's very strange that the welfare of trees and mountains is given relatively high standing, because trees and mountains do not have welfare. However, trees and mountains do affect aggregate welfare.
Similarly, we might say people gave negative consideration to the welfare of murderers and child molesters because they want them to suffer (plausible), or perhaps they thought that the existence of murderers and child molesters has a negative impact on aggregate welfare.
There's an adjacent point in the comments about how people could be conflating the intrinsic importance of a person's welfare with the obligation to care for the person's welfare.
I do want to recognize that people really do value the intrinsic welfare of different groups differently (and even strangely), but some of the results only make sense on this alternate framing.
I agree. I also think this survey, as well as the moral circle expansion concept in general, unhelpfully fails to distinguish between *absolute moral worth* and *relative moral obligation*. Most people believe that animals are *absolutely* less morally valuable than humans, but that they have lesser *relative* obligations to foreigners than to their fellow citizens, in the same way that they have special obligations to their family and friends.
How come caring more about people we know is "human and not doing so would be neither possible nor desirable", but caring less about outgroup members is not? Ingroup bias is a human universal just like bias for kinship is. I think reducing ingroup bias is more tractable, but I'm not convinced that we should categorize it differently.
Indeed, from what I can see around the globe, caring less about outgroup members is very human. Most tribes' name for themselves means something like 'the people' or 'the real people'.
I think this analysis misses the concept of reciprocity, which lies at the foundation of ethics. I owe more to the children who are tortured to make carpets for export than to the children who are tortured to sweep chimneys in their home countries because we indirectly interact in a way that benefits us both. Positioning yourself as a disinterested distributor of largess makes ethics seem less like an imperative and more like a whimsical hobby to be replaced as fashion changes.
This is a good observation. I do feel more responsible for people with whom I have a connection: economic, supply chain, religious, etc. Even a rather remote connection or affiliation. Especially so if my actions (buying a carpet) have an impact on them, positive or negative or both. It can be pretty attenuated, but even that thin strand of connection gives, I think, more valence to that person and the relationship. It's not fair in a universal sense, but it's a sense - maybe deriving from care ethics.
It may lie at the center of your ethics, but not everyone's. I don't agree at all that I owe more to the carpet maker than to the chimney sweep. Doing business with someone does not incur a broader obligation for their welfare than you have for anyone else.
But it does correlate their well-being with your self-interest.
Very loosely, sometimes. But then, that's not altruism, is it? And not about obligations so much as incentives.
Now play the game of aligning ethics with natural selection. You'll get that altruism comes from self interest, and that obligations are encouraged by incentives.
Why should I play that game? Why does natural selection have anything to do with what is right? Why is it not just a source of bias in our efforts to determine what is right?
I'm not interested in how altruistic impulses historically came about. I'm interested in how I morally ought to behave, and I don't think we can derive that ought from an is.
You can decide for yourself what you think is right, but once you try to do ethics communally you need some common ground for deliberation. What could be more common than human biology? What do you and your interlocutors use instead?
So I'm just going to go post by post and disagree with each and every one of them, am I?
I will be surprised if most EAs actually behave that way. The way I model it, EAs are people who have, in their moral coalition, an element that values all people. But I expect them to care about various communities too: the SciFi community, their country, their hobby community.
There are a lot of people who believe you should FIRST take care of the closer circles, and only help others after there are no more poor (or undeserving poor) in your country, while EAs have a budget for everyone and a different budget for local charity.
For example, Maximum Impact Israel is EA; it seems ridiculous to me to say that it's not: https://maximpact.org.il/
It's the place where you can donate to GiveWell in Israel and have it count for taxes. It also lists both the worldwide recommended charities: https://maximpact.org.il/effective-global-orgs/
and local ones.
I sort of feel the same way about the sixth circle. There is a function-of-caring that assigns every person positive weight, including Hitler. But I also have a function of incentives and TDT that gives Hitler negative value. And sure, I sum them, and the sum may come out negative sometimes, but the first function still exists.
https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/
And it's very much a Value Differences As Differently Crystallized Metaphysical Heuristics thing, in my opinion. I just... don't see that important a difference between having an impulse to punish and calculating that you should? But this is a longer and more complicated issue, and this is already a long comment. So...
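A minimal sketch of that two-function picture, with hypothetical function names and toy numbers (nothing here comes from the original comment, it's just one way to illustrate the sum):

```python
# Toy illustration of summing a care function (always positive) with an
# incentives/TDT function (which can be negative). Names and values are
# hypothetical, chosen only to show that the total can go negative while
# the care term never disappears.

def care(person: str) -> float:
    # Assigns every person some positive weight, however small.
    return 1.0

def incentives_and_tdt(person: str) -> float:
    # Toy values: strongly negative for a Hitler-like figure, zero otherwise.
    return -5.0 if person == "Hitler" else 0.0

def total_value(person: str) -> float:
    # The sum may come out negative, but the care component still exists.
    return care(person) + incentives_and_tdt(person)

print(total_value("a stranger"))  # 1.0
print(total_value("Hitler"))      # -4.0
```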
In my model, EAs are people who have an EA element in their moral coalition, when most people don't. They are not that element; they just actively have it.
I expect MOST EAs to care about communities that you declare "fake" (in an obviously wrong way - if a community has mutual aid, it looks pretty real to me). I expect them to donate to people from their community, or let them sleep on the couch, even if they are strangers. I'm pretty sure I've already encountered descriptions of that on EA forums.
I find the claim that they don't weird and surprising. Like, it looks obviously wrong to me, so you must have seen that too. Maybe I misunderstood? Do you predict that most EAs will not help a member of their hobbyist community whom they don't know?
I agree. But I do think that an EA is going to have a very different idea of which communities should be treated differently, at least if you go by their revealed preferences. I predict that an EA would generally show more concern for a fellow EA, but would show far less difference in how they treat a fellow American and, say, a Mexican. But I definitely think that there is a substantial difference here between how normal people behave and how most EAs behave.
Similarly, I agree that most people don't actually have clear-cut values and beliefs, but rather certain patterns of behaviour they tend to follow. Certainly, I don't think most people would consider it a meaningful difference whether you want to punish someone for revenge or because you want to make an example of them for the purpose of deterrence, and I expect that even if there were such a difference, both types of people would answer a survey similarly. Also, for all that people claim and think that they would not seek retribution if it had no benefits, I do think this would be less common if situations where these two values were in conflict weren't so uncommon. There is also, I think, a smaller difference in outlook than might appear at first glance, because at the end of the day figures like Hitler haven't directly affected most EAs. It's easy for your commitment to a moral philosophy, or a subconscious desire to signal your adherence to the philosophy, to override the mild preference that most people have in favour of Hitler suffering; but when it comes to situations where ordinary people have a strong preference for suffering, for example towards people who have severely wronged them and their family, I expect an EA would behave like anybody else. If somebody murdered an EA's family, I think they would want revenge, and my probability of this is only very slightly lower than it would be for a normal person, and honestly higher than it would be for many normal people who favour compassion for criminals. But I do expect that having a well-developed moral outlook that generally does not view retribution as inherently good does affect decision-making in the real world, especially when there are no counteracting forces at play, as in the Hitler situation.
If I compare EAs to my Fantasy and SciFi community, I don't expect EAs to prefer locals less. The trend toward choice-based communities in certain circles is not EA-specific. So it's a difference between some Western middle-class liberals and most people, not a difference between EAs and most people.
"I expect an EA would behave like anybody else." - I actually don't. We all have impulses we disendorse, and self-control and the ability to not act on those impulses. I am, nevertheless, skeptical of the claim that EAs don't have those impulses.
Why is your probability of revenge higher for EAs than for some other people?
My probability of revenge from an EA against a criminal who hurt their family in a serious way is higher than it would be for someone who happens to endorse being compassionate towards criminals, because most people who support compassion for criminals have a moral intuition that it is wrong to be brutal towards them. But I think some, though not all, EAs don't actually have a moral intuition in favour of being compassionate towards criminals. Instead, they have an intuition in favour of utilitarianism and just happen to have noticed that not taking revenge for its own sake is an obvious implication of their preferred moral philosophy. In general, I expect that a human ignoring a tendency that nearly all humans have is more likely if they directly have emotions that go against this tendency, rather than simply having emotions in favour of a moral philosophy which happens to dictate this outcome in this particular case as one of its many implications, especially when it is not that difficult to modify utilitarianism into a slightly different moral philosophy that would permit revenge. I could, of course, be wrong about this, and I'm not super confident about any of it, but this was my underlying reasoning.
I don't have a tight enough model of EA; I mostly have a feeling that different EAs are different, and no good guess of how many belong to which group.
For example, there are animal-EAs who are influenced by the general animal rights movement, and feel to me somewhere in between the general EA cluster and the general Animal Rights cluster. I generally don't think about the AI-related EA group, but this group looks to me more Timeless Decision Theory than utilitarian, and very much Taking Ideas Seriously. They will enact revenge because it is the TDT-right thing to do, or not if it isn't, but I don't think they will decide based on emotions.
And I think we use somewhat different framings? I would not have said Ozy has an intuition that harming criminals is wrong, just that hurting people is wrong. That is a general intuition that most people have. But most people have, in addition, an intuition about tit-for-tat, and that cooperating with someone who constantly defects is bad.
If I try to translate into my framework: there are people who have the intuition that revenge is right, but they contextualize it as a bad impulse, and don't count it as a moral impulse but as an anti-moral one.
Or maybe: people have the impulse to revenge, and the meta-knowledge that it tends to go wrong, and a commitment to do not-that.
I think maybe I generally estimate people's ability to act as they decide, and not as they feel, more highly than you do?
I'm left with the feeling that you have some model that is different from mine, but I don't fully understand it.
Also, I don’t actually think that most people think that hurting people is wrong. Most people agree that hurting innocent people is wrong, but many think that hurting murderers and other bad people is actively good, or at least that hurting them within certain limitations is good. Partially that’s because people think it’s an effective punishment, and partially because they want revenge; but honestly, as I think you agree, ordinary people don’t think in those terms and would regard that as a bizarre technical distinction they don’t care about.
When I was a child, my brother sometimes played on the computer, which was in the living room. And he would explain things, or say WOW, or do something else that signaled to me that he wanted me to ask him what interesting thing he had seen. It was distracting, so I asked him to stop. He said that he wasn't doing it at me, he just did it, and not because he was seeking my reaction. And continued. So I went to my room. The acoustics suck, so I could hear very well what happened in the living room. But when I wasn't there, suddenly, my brother was silent.
It didn't feel to him, from the inside, like he was seeking my reaction. But he was. There is a difference between what algorithm one follows and how this algorithm feels from the inside.
I'm looking at the algorithm, and you can say my algorithm is wrong, or not cleaving reality at its joints. But saying that people don't think in those terms is such a non sequitur that I feel this explanation may help to explain what I'm trying to do, which is to model how people actually behave, not what stories they are telling themselves.
In the same way, I said that I model two components to morality: a general care component, where hurting people is wrong, and a general deserve component, where people should get reward or punishment for their deeds. And you just... stated your model again? I'm really confused about what you are trying to say here.
Like, are you stating there is only a "deserve" component and not a "care" one? That would be a weird theory, but an interesting one. So EAs differ from other people in that they have both deserve and care components, while the general population has only deserve? Except that Christianity supports care-morality, and has been here for quite some time.
Are you opposing the separation into two components for some other reason?
I agree that you seem to think people have more ability to ignore their emotions and act on their best judgement, while I expect more weakness of will. But honestly, in a situation where, say, somebody murders an EA’s family, I expect they would simply change their opinion on the moral permissibility of revenge for its own sake, or change their opinion on how effective brutal punishments are. You seem to think people have more impartiality as well as more strength of will. I think you overestimate how important the subdivisions in the community are, since in fact there is a lot of intermixing, and also people being influenced by the ideas of other people in the community. But I am mostly going off my experience on blogs and the EA forum. If you have interacted with people in person, you might be better informed.
I am not entirely sure whether I have nailed down where we disagree, but my impression is that mostly you expect people to be more capable of ignoring their emotions and judging things from a rational and impartial perspective without being overly influenced by what they would like to conclude, and also that you think people have more ability to act on their best judgement even when they really, really don’t want to. Feel free to correct me if I have mistaken the source of our disagreement.
I think we have somewhat different models of "best judgment". I don't want to go into multi-agent theories of mind and stuff like that, but on a basic level, I see myself as an attempt to fulfill the maximum amount of my desires and urges and needs. They contradict each other, and are messy, and can't plan, so I am the one doing that. But I'm not doing the thing where I really, really want something but I logically decided that it's wrong, so I virtuously avoid it. There is no morality besides CEV; what I'm trying to do is fulfill my desires to the fullest. If I set myself against myself, then I did something deeply wrong, in my opinion.
So when I try to avoid doing something stupid because of a judgment lapse, that is because if I do it, tomorrow I will wish I hadn't. But if I still want it after three months? Well, then I judged my desires wrong, and I will try to fix that now.
So impartiality and willpower are just... not something people need to have, long term, in my model. Short term? Sure. But it's a "talk to my friend to avoid shouting at my landlady, because tomorrow I will not want to have shouted at her" sort of thing (real story!).
I don't have good information, and especially not on the same people - I live in Israel; if I meet EAs, those are not the same EAs as in the USA! All my impressions here are very weakly held, because I have so little information.
Also, another possible generator of differences: I may assume that people have been tested by fire more than you do. Like, everyone went to the army or knows someone who did, everyone has buried someone. And there are people who want revenge and there are people who don't.
The people who don't? It's not because of impartiality or strength of will. I can link some (not-EA) examples, if you can read Hebrew, though I will be surprised if you do.
But all this "when shit hits the fan they will want revenge" - well, where I live, shit already hit the fan. So I probably have a... generally different model of how people react to this sort of thing. And I really don't know how big the differences between populations are.
But it's not "ignoring emotions", it's "having different emotions, which come from a different interpretation of reality". And here I am, trying to avoid politics, so I will not try to explain our local debate about war vs. freeing the abducted. But the difference doesn't come from one side being more rational and impartial; it comes from one side feeling in their guts that the consequence of A is B, and the other side not.
Actually, now that I think about it... how can people who have never even been close to a life-and-death situation know how they will react? I calibrate my model on people who lost their children and then called for revenge or peace or settlements or war. And they are not EAs, but they are people who claimed they were against revenge and continued to be after someone close to them was murdered.
I think much of my expectation of EAs comes from there. Or, well, my general model of how people who claim to be against revenge react to having someone close to them murdered, or things in this class, comes from looking at how people actually reacted, as it happened. And EAs are just an instance of this class. But... how do you do that in the USA? On what do you calibrate? And do we even react in the same way, or is my extrapolation wrong?
(I think the thing I gained most from my recent discussions on this blog is a deep appreciation of how different different places are. Maybe we are all WEIRD, but wow, this facade of similarity sure conceals vastly different cultures and assumptions!)
(It might be a little gauche to bring this up here, but a bunch of EAs have been arrested this year for membership in a revenge-obsessed phyg.)
Are you talking about Ziz? I have no idea who else it could be, but it makes no sense to bring her up - she is pro-revenge.
So when we talk about whether EAs believe in value X somewhat more than the general population does, bringing up the one person who is an outlier in the other direction looks weirdly irrelevant, even if you see Ziz as relevantly EA - and she is not, under the first two definitions I thought about.
I have no problem with "gauche", whatever that even means. I have a problem with unsound arguments.
Stuff like Ziz is ironically one of the big reasons I never got deeper into EA. (The other is not having a lot of people into it around me.) Seems like I'd adopt a 'weird' ideology and not even be a better person by most metrics--EA and rationalism make cults just like every other belief system, it seems.
I find this whole way of thinking really strange. What do you even mean by "got deeper"?
I believe things because, to the best of my knowledge, they are true. I do things because I want to, because to the best of my estimates I will get better results, by my values, by doing so.
I can't choose what is true or not (and I am unwilling to compromise by believing things that are not true).
I can choose to just not do things that I believe are net-negative. But I would not call that avoiding going deeper.
So what do you even mean? When someone says something like that, I feel like we are talking different languages.
You said you wouldn't expect EAs to behave like anybody else if they were personally affected, because you think EAs have better self-control and ability to not act on impulses. I think it's good to notice that empirically EAs¹ can totally use REA memes like game theory to rationalize those impulses anyway.
¹: if you want to nitpick over whether they were EAs: https://archive.is/WQZiC#selection-1490.0-1490.1, https://archive.is/h0WAQ
What? No, this is not what I said! I said that *people*, generally, have self-control. So I expect that people who disendorse some shard of desire will act on it less than people who have no opinion on it, and those will act on it less than people who actively endorse the impulse.
So: believe that revenge is bad -> do less revenge.
Also, by Ozy's model (but not mine): believe revenge is bad -> EA.
Minor: Clarification on "effective altruists tend not to care about marginalized people more than other people." I wasn't initially sure if you meant "EAs don't care about marginalized people more than other people care about marginalized people" or "EAs don't care about marginalized people more than they care about other people."
This is really well done.
My inverted circle varies in size with the time and whether I've had coffee and/or breakfast and often includes much of humanity, which is probably why I'm not an effective altruist.
Moral circle expansionism is bullshit. You should care about not causing or allowing suffering for all creatures by default, and then constrain your concern to the scope where you have the potential to act effectively.
Excellent. Do you think this is canon among EAs - the thing about strangers being equal regardless of group identification?
I think that if someone knows that certain people share (some of) his important values, then he might not regard them as complete strangers, even if he never met them personally. I can even imagine him having certain positive feelings toward them that he does not have toward his family members who don’t share his important values.
You don't have control over what you care about. It's all automatic and unconscious. It's just a delusion to think you can reason yourself into changing what you care about.
Is this an aspirational essay? I ask because many of your statements aren't true.
EAs show preference for one group of strangers over another all the time, by the sheer physical fact that they are more familiar with one than the other. For example, everyone knows about the category 'starving Africans', but 'starving Mongolians' is somewhat obscure. Certainly one group gets more donations than the other.
Secondly:
"For example, punishing bad behavior can keep people from behaving badly in the future, and sometimes you need to lock people away where they can’t hurt anyone"
That's reason enough to suppress a population.
Nonhuman example: you suppress a population of mosquitoes because they will spread disease in the future.
Human example: you suppress a population of bandits because they will predate on traveling caravans in the future.
The flaw woven throughout the points of the essay is that you don't know enough about the people in the outer circles relative to the outsized effect your aid would or would not have on them. Draining a pond to kill mosquitoes has both positive and negative effects that are easy to foresee if it's your personal pond, but impossible if it's a pond on an Excel sheet you got from a third-circle category. Likewise, giving aid to Sunnis vs. Shiites in Iraq is going to have outsized consequences for other people, consequences which people in, say, America are ill-equipped to know or understand.
Mongolia is a middle-income country with a fairly robust welfare state. The reason you don't hear about "starving Mongolians" is because there aren't that many starving Mongolians.
If you look at a map of the world coded by GDP (PPP) per capita, you'll see that quite a lot of the world is doing okay. There are only a few countries in desperate straits, and most of them are in sub-Saharan Africa.