I've always liked the "simplest possible, rudest possible" summary of the three main normative ethical theories:
Deontology: "I was just following orders"
Virtue Ethics: "Intent, in fact, IS magic"
Consequentialism: "The end justifies the means"
Works pretty well at explaining what the three actually are.
I don't think that's a fair description of deontologists, who rarely defend obedience as a rule.
Obedience to the rules, I think.
That's obviously not what "I was just following orders" connotes in Western culture since 1946!
Kant specifically said that if an axe murderer came to your door wanting to know where your friend was so they could murder them, you should not lie to them. That same logic would call for not lying to the Gestapo when they come to your door looking for Jews.
Deontology may not literally have "follow the government's orders" as one of the rules it sets out, but it still tells you that in a Nazi situation, you should ignore your conscience and follow the rules of deontology instead.
I think most deontological systems would still allow you to refuse to answer. But yeah, on the broader point I agree: if I were a Jew in Vichy France, I wouldn't trust a deontologist.
Probably also the reason why all more-than-token principled resistance to the Axis in continental Europe came from either the Church (virtue ethicists) or the labor movement (consequentialists).
I almost agree, with one disagreement. my understanding of EA is more... directional. you say "Effective altruism involves going all-in on these intuitions everyone shares," and... that just doesn't look true - most EAs don't go all-in. they just... go further than the median, which is very much not the same thing.
and that is my take on EA. my morality is some CEV that i myself have no idea what it is, but i can see the direction clearly. there is a level of consequentialism that would be too much for me, but on the current margin, i want to move charity toward welfarism and consequentialism. i'm also very suspicious of maximization per se, but i'm much more maximizing than the median.
i think there is a tails-come-apart thingy going on now. because there is such inefficiency from a WC (welfarist-consequentialist) point of view (or maybe just a care-foundation point of view?), a lot of people who want more of that are part of EA. but if we win, if we feed all the hungry and prevent all the torture (and secure the future against threats), then the disagreements will surface.
there is a lot of now-unseen disagreement of the form "value A is 10x more important than value B" or the other way around, which is dwarfed by the best interventions being orders of magnitude better.
in a world where people are 10-30% welfarist-consequentialist, everyone who is more than 50% WC looks the same, but there is actually a big difference between 50% WC and 90% WC (and i don't believe in 100% WC).
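(a minimal toy sketch of this tails-come-apart point - entirely my own construction, not anything from the post: while one intervention dominates by orders of magnitude, agents with very different value weights all pick it, and the A-vs-B disagreement only surfaces once it's fully funded)

```python
import random

random.seed(0)

# Toy model (my own construction): agents split their weight between two
# values, A and B. While one intervention dominates by orders of magnitude,
# everyone with any welfarist weight picks it, masking value differences.
N_AGENTS = 1000
agents = [random.random() for _ in range(N_AGENTS)]  # weight on value A (rest on B)

# Interventions scored on (value A, value B) -- made-up numbers.
now = {"bednets": (100, 100), "pro-A org": (2, 1), "pro-B org": (1, 2)}
after_win = {"pro-A org": (2, 1), "pro-B org": (1, 2)}  # bednets fully funded

def favorite(w_a, options):
    """Pick the option with the highest weighted score for this agent."""
    return max(options, key=lambda k: w_a * options[k][0] + (1 - w_a) * options[k][1])

for label, options in [("now", now), ("after we win", after_win)]:
    picks = [favorite(w, options) for w in agents]
    print(label, {k: picks.count(k) for k in options})
# "now": all 1000 agents pick bednets; "after we win": the population splits
# roughly in half, and the previously invisible disagreement surfaces.
```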
also, i really don't think EA is that maximizing. it's just... humans are not naturally strategic; the median level of maximization is, like, 3%, so people who are 20% maximizers look like a lot.
What would 100% maximizing look like then?
like Leareth from A Song for Two Voices: https://archiveofourown.org/series/936480
or, if they fail, something like ziz. there is a reason why Ends Don't Justify Means (Among Humans), and it's mostly that we don't have a utility function, but are changed by what we do. so pure consequentialism is not an algorithm our mere meat-computers are capable of running.
https://www.lesswrong.com/s/waF2Pomid7YHjfEDt/p/K9ZaZXDnL3SEmYZqB
What's the tldr on Leareth? That looks very long.
i don't actually have a tl;dr. Leareth and Iomedae are the two characters i actually learned something important from by reading glowfics about them, and i don't know how to describe that besides "a way to look at and interact with the world" or "a way of thinking".
Ok, let's try this again:
Can you describe how Leareth acts that is 100% maximizing?
Or, can you describe how an EA person would act if they started doing it 100%?
no, i can't. i provided a reference, and i claim that there is an answer in it. i don't expect you to read it just to get an answer (though i do honestly recommend reading it for fun - and abandoning it if you don't enjoy it). i don't know how to summarize it in any way that won't lose 99% of the meaning.
i am aware that's not a useful answer, but it's better than nothing, and more importantly, it's true.
Oh is ASF2V a glowfic?
Link to Iomedae?
no, ASF2V is a fanfic: https://archiveofourown.org/series/936480
i haven't read it. i didn't plan to, but then i read a ridiculous amount of glowfics that spawned from it. my favorite is we know we once were gods: https://www.glowfic.com/board_sections/551
it's also where i recommend starting, as it provides an introduction. it contains some amount of spoilers for ASF2V, how bad i can't judge, as i still haven't read it.
i found Iomedae semi-randomly, in a glowfic where she was isekaied ~2,000 years into the past of ASF2V: https://www.glowfic.com/posts/6736
and then went on to read basically all the glowfics about her.
What is the supposed logic behind the zizzies' activities? What are they trying to maximize?
i am no expert on ziz, but from what i understand, she wants to save the world from unfriendly AI. the chain of logic that led them to murder people looks to me... suspicious, at the least.
What was that chain of logic though?
the place where i stopped reading ziz's blog is when she started claiming that almost everyone is evil and that it's fine to use violence against evil people, and it just didn't seem very... logical. i probably fail at her ITT (ideological Turing test), but i just don't see the logic.
or: all carnists are evil, and it's fine to be violent toward evil people because ???
it's also when she started with the semi-mysticism. vampires and other undead, hemispheres, prime, soul crucibles. there just wasn't any logic behind that.
I like the project of trying to profile what effective altruists actually believe, but I think you could take more care in defining which effective altruists you are profiling here.
For example, the effective altruist subreddit, about half of whose members identify as EA, mostly seems to want to donate to third-world charities and feels negative about the "weird" aspects of EA. I suspect most of them would disagree with your characterization of what "EA" is. Or perhaps they'd accept your definitions, and then immediately stop identifying with the label.
https://www.reddit.com/r/EffectiveAltruism/comments/1ii7xm2/who_here_actually_identifies_as_an_effective/
As the sort of person who identifies as EA, wants to donate to third-world charities, and is highly uninterested in the "weird" aspects of EA, I find the definition in the post highly on-point. I just don't care that much about shrimp welfare or think that AI is going to kill us all.
> effective altruists can certainly make the case by mustering any number of disgust-laden anti-Semitic rants, “I was just following orders” excuses for atrocities, and well-paying sinecures given to undeserving nephews.
Whew, this was a bit of a garden-path sentence! I was like 'wait, why are the effective altruists making such rants?'
Sorry, my love for long sentences can get away from me. >.<
Too many shots at the Manifest conference afterparty.
>Many prominent effective altruists—such as Will MacAskill and Toby Ord—don’t identify as utilitarians.
Yes, but I always got the impression they were "utilitarians with some nuances" or "utilitarians with an appreciation that the full moral theory has not yet been figured out." In practice, mainly utilitarians - much more so than almost everybody else.
"The care foundation tends to be welfarist (obviously) but also consequentialist and maximizing. If you care about someone, you want your actions to actually leave them better off. And suffering is bad no matter what, while most people feel okay going “eh, good enough” about the amount of weird sex they’re having. Most people intermingle the care foundation with other non-welfarist, non-consequentialist, non-maximizing foundations. Effective altruists don’t."
huh, I actually think purity is more maximizing than care - people who apply purity to weird sex think people shouldn't have ANY weird sex AT ALL
arguably loyalty and authority are also more maximizing than care? though I guess most people have exceptions to those
I don't see how Effective Altruism is consequentialist, unless you mean a form of consequentialism with sharply bounded time horizons. If saving a thousand lives definitely results in a more vicious civil war twenty years later, killing a million more people than would otherwise have died, then I am not sure I would stop the famine now; I would instead consider trying to build social institutions that tend to reduce conflict and its intensity with the same money, to try to save those 999,000 additional people in the future, even if the intervention is less than certain to achieve its aims. GiveWell's metrics don't seem to focus on events in the murky future, with a lot of weight being assigned to stopping people from dying right now over squishier interventions that are hard to assess without waiting for decades. I am not saying that GiveWell is wrong, and I am not saying Wenar was right, but GiveWell seems to use a bounded horizon (or a high discount rate) for future events.
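(a back-of-the-envelope sketch of the bounded-horizon point, using this comment's own numbers; the discount rates below are illustrative assumptions, not GiveWell's actual parameters)

```python
# Toy comparison: 1,000 lives saved now vs. 1,000,000 extra deaths in 20
# years, under different annual discount rates on future lives. All the
# numbers are illustrative; this is not GiveWell's actual model.
LIVES_NOW = 1_000
EXTRA_DEATHS_LATER = 1_000_000
YEARS = 20

for rate in (0.0, 0.10, 0.35, 0.50):
    discounted_harm = EXTRA_DEATHS_LATER / (1 + rate) ** YEARS
    verdict = "don't stop the famine" if discounted_harm > LIVES_NOW else "stop the famine"
    print(f"discount rate {rate:.0%}: future harm ~ {discounted_harm:,.0f} lives -> {verdict}")
# Only a discount rate above roughly 40%/year flips the verdict here, which
# is why a bounded horizon acts like a steep discount on the murky future.
```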
> If saving a thousand lives definitely results in a more vicious civil war twenty years later
"Definitely" doing a lot of work here. I've been reading a lot of superforecasting-related stuff, and I can't imagine them putting any particular country having a civil war in *20 years time* at higher than the base rate (for that country/region; obviously, it's more likely in Syria than the USA). Certainly hard to imagine them changing it upwards because some twelve year old got covered in an ITN and didn't die of malaria.
The reason the time horizons are bounded is that the only real way to predict that far into the future is just to observe long-term secular trends (e.g. rising GDP per capita) and base rates (e.g. there's a ~5% base rate of pandemics over the past century), neither of which are vulnerable to the impacts of the AMF. Human life is a chaotic system, highly vulnerable to changes in initial conditions. Yeah, maybe the 12yo who didn't die of malaria turns into the next Joseph Kony, or maybe he turns into the next Quett Masire, or, most likely, he just becomes some guy who gets married and has some kids and works a day job. The average additional human's contribution to the species is not negative.
Good point. I'm probably also thinking of longtermist thinking as separate from EA, which is a controversial take. Maybe a better way to express this is that within EA, the specific horizon or discount rate distinguishes different groupings?
I don't see that as the main disagreement between longtermism, animal-suffering-reduction, and GiveWell-style giving, which are the three main parts of EA in my categorization scheme.
i'm very skeptical about longtermism's effectiveness. in AI, it looks like the AI-concerned part of longtermism made the possible end of the world come sooner rather than later, and it doesn't look like it bought a higher chance of surviving the coming AI in return, even if you assume that the AI end-of-the-world scenario is probable and the most important risk.
wanting to avoid net-negative interventions is very EA, and the disagreement looks to me more factual than value-based.
the belief in our ability to positively affect the long-term future, maybe? or risk-seeking vs. risk aversion.
but i just don't see the value difference here.
I don't think it's helpful to frame EA as being consequentialist and then define it in the way you do. Really, all EA requires is a much more modest premise--not that helping other people is the primary thing of importance, but that it's at least one important thing you should try to do: https://www.goodthoughts.blog/p/beneficentrism. I also think you're somewhat redefining consequentialism--generally in philosophy it's defined as the notion that the consequences of your actions are the sole determinant of their rightness, such that it's always best to take the action with the best consequences.
The point of this project is to describe what effective altruists in general actually believe, not the minimum set of premises that would be required to believe in something like effective altruism.
Gotcha. But still, given that the main criticism of EA is that it's utilitarian-y utilitarianism done only by utilitarians, I think it's worth getting clear that utilitarianism is not at all a needed commitment, and that the core of EA relies on utterly commonsensical moral premises.
EAs in general are also atheists, but it seems weird to say, e.g. "Effective altruism is a form of atheism."
(Unrelated: just want to say, your blog is incredibly good, and I consistently enjoy your articles--one of my favorite blogs on the internet!)
I think it's fair to say a core EA tenet is methodological naturalism, making it incompatible with alieving the claims of most organized religions.
It's hard to have a society embrace both independence and optimization. How will EA, to the extent it becomes widespread, avoid the constraints associated with, say, everyone buying mosquito netting and no one manufacturing cheap water filters? How do we avoid the tyranny of charity?
Good problem to have! Charity evaluation (e.g. GiveWell's) computes "room for more funding", and withdraws its recommendation if the charity can't make good use of more money; this happened with former GiveWell #1 charity VillageReach. Huge donors can fund a small charity up to saturation and then stop.
More donations also mean more demand for charity evaluation and allocation baskets that can saturate smaller charities in order, so this can still scale for a while.
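(a minimal sketch of that saturate-in-order idea; the charities, costs per life, and funding gaps below are made-up numbers, and this toy greedy allocator is my own illustration, not GiveWell's actual model)

```python
# Greedy allocation sketch: rank charities by cost-effectiveness and fill
# each only up to its "room for more funding," then move to the next.
# All names and numbers are made up for illustration.
charities = [
    # (name, cost per life saved ($), room for more funding ($))
    ("bednets", 5_000, 40_000_000),
    ("vitamin A", 7_000, 15_000_000),
    ("vaccines", 9_000, 25_000_000),
]

def allocate(budget):
    grants = {}
    for name, cost_per_life, room in sorted(charities, key=lambda c: c[1]):
        grant = min(budget, room)  # never exceed what the charity can absorb
        if grant > 0:
            grants[name] = grant
            budget -= grant
    return grants, budget

grants, leftover = allocate(70_000_000)
print(grants)    # bednets and vitamin A saturated; the remainder goes to vaccines
print(leftover)  # 0 -- with a bigger budget, evaluators must find new charities
```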
At some point, I do start getting worried about the problems of what's basically a planned economy. If 10% of everyone's income were going into the maw of Big Effective Altruism, things would be pretty good and we could just afford a lot of inefficiency, but lots of bright people have ideas for trying to jury-rig markets, e.g. with impact certificates that a new and unproven charity could borrow against.