I get the first two. I totally don't understand why effective altruists should regard AI (rather than catastrophic climate change or nuclear holocaust) as the biggest threat to humanity. This linkage has been a disaster, certainly in PR terms.
The usual EA view is that climate change will be really bad, but probably won't be a civilization ender.
As for AI over nuclear war, I think this is a neglectedness thing. Governments already seem to care a lot about nuclear war, and any solution would likely be political. On the other hand, at least theoretically, someone could solve the AI problem tomorrow if they came up with the right algorithm.
I understand the arguments, but this reasoning has nothing to do with Effective Altruism as a philosophical position. I could advocate for or reject it on the basis of almost any philosophical viewpoint.
The result of making it part of the EA political program is to cost EA support among people who don't agree with the view on AI, while highlighting the influence of proponents of this view (Bankman-Fried, MacAskill, etc.), who are also widely disliked for other reasons.
I would not be surprised to learn most EA people prefer science fiction to Shakespeare (I do). But it would be stupid to make this a test of EA orthodoxy or to anoint SF writers as representatives.
I don't see how the reasoning has nothing to do with EA philosophy. EA is, at least in part, about taking weird ideas seriously. Being worried about AI is weirder than worrying about nuclear war or climate change. That's why it's a neglected thing that isn't already being taken care of by governments the way nuclear weapons and climate change are.
I get that you are worried about PR. But there is no real EA orthodoxy. You can believe AI might have bad effects on civilization or not, and put money toward world hunger instead. I understand that there is often discussion of whether EA should split into animal EA, global poverty EA, catastrophic risk EA, etc. The problem is that we'd expect to end up with three or four different conferences with 70% to 90% of the same people at each one.
EA tends to be, fundamentally, the view that reality doesn't care if your cause is weird. So it's hard to separate the global catastrophic risk crowd from the solving-poverty crowd, because they are often the same people. If we expected that a physics breakthrough that could stop nuclear weapons might be discovered soon, you would expect a lot of EA money to go toward that.
I don't think the PR thing is really that bad. EA isn't itself a charity. People don't give money to "Effective Altruism", they give it to GiveWell or Open Philanthropy or whatever. And getting people to identify as EAs is way, way less important than getting them to be more altruistic and to be more effective with their altruism.
Say for the sake of argument I'm trying to convince Alice to give some money to the Against Malaria Foundation. There are a few counterarguments that Alice can make:
"All the money will go to warlords" / "The bed nets will just be used for fishing" / "Foreign aid causes economic damage to local industries" -- arguments that go to the efficacy of the intervention. The conversation gets EA Judo'd.
"I still think it's valuable to support local arts organizations / my church / the homeless shelter." -- An argument that goes to the heart of EA, to which I am very sympathetic. I'd still encourage Alice to dedicate some part of her giving to high-impact charities. If she shifts half her giving from low-impact to high-impact, that's a win whether or not she calls herself an EA.
"I don't think I have a duty to help people in other communities / nations / cultures" -- a more fundamental disagreement. I suppose I could point out ways that Alice can be more effective with her giving within her sphere of concern, although in real life I'd probably mumble some excuse and leave the conversation before we got into a fight.
"You're talking about effective altruism, but aren't there other people who talk about effective altruism who are weird about AI / shrimp / simulated beings in possible futures?" -- This isn't a real argument. It's fallacious on several levels. I could say "it's a big movement, some people focus on weird topics, but the Against Malaria Foundation doesn't have anything to do with AI", but if Alice is making this sort of argument, she's already made up her mind.
For a single organization in the EA space, PR is indeed a concern. But I don't see how some EAs being Weird In Public hurts other, unrelated EAs. Is there a causal pathway I'm missing?
We seem to be in furious agreement on what is desirable, namely that EAs should all believe in directing altruistic efforts where they are most effective, and that they should be free to disagree on all sorts of other issues.
But as I see things, the public perception of EA (and of some supporters, as the poster above suggests) is that being Weird in Public is a core commitment. You may see it differently.
This is quite good. I wouldn't quibble with 2 (Think with numbers) or 3 (We don’t know what we’re doing—but we can figure it out together). I would quibble with 4 (You can do hard things), but that's more a statement of personal inferiority than any real critique of the actual ideology. I would argue, though, that 1 (Don’t care about some strangers more than other strangers because of arbitrary group membership) is probably one of the greatest weaknesses of EA as far as the real world goes.
Because dang it, people do, a *lot*, and it's so constant throughout history that it's probably biologically based; the only thing that really seems to keep it in check historically is religion (and even then only universalizing ones like Christianity or Buddhism, and of course they do the ingroup thing too). A term I found useful is 'universe of obligation'--basically, the people you're supposed to care about. EA theoretically wants to extend the universe of obligation to every human (and in some cases beyond), and most people aren't built that way. Acting like you can turn most people into EAs is much like ignoring the biological reality of sex (which you advocated against elsewhere: https://thingofthings.substack.com/p/notes-towards-a-sex-realist-feminism).
Thank you for laying out the whole thing so clearly.
Yes, it is common, in history and the present day, for people to care more about strangers with cultures or skin tones similar to their own than about other strangers. But just because it's common for people to have this bias, that doesn't mean the bias is justified. Jumping from the first to the second is commonly referred to as the "is-ought" fallacy.
Slavery has been incredibly common throughout human history, to the point where people could believe it was "natural" and "based in biology". But today, it is rightly considered a moral abomination in most of the world. I am glad that early abolitionists did not heed arguments like the one you are making here.
I'm not making the is-ought argument (though I could see how you could think that). I think human tribalism isn't changeable at this point in time, is almost universal, and it will provide an upper limit to the number of people persuadable of EA ideas. That's an is, not an ought; it's not that more EA-convincible people would be *bad* (certainly it would make coordination problems easier), it's that it's not possible at least right now.
"Don’t care about some strangers more than other strangers because of arbitrary group membership."
I can make a case that you should. Basically, the EA blind spot might be this: OK, I save the life of this person, but I do not care about any further steps. Will this person be a good person who goes on to save more lives, or a bad person who will kill people? Obviously you should care. You should save the lives of the most good people, not the most people.
Hence people do not have equal moral value: better to save a doctor than a murderer.
Arbitrary group memberships are not that random; they mean closeness. We know more about group members than about others, hence we can more easily figure out whether we are saving doctors or murderers.
Appreciate this post.
Honestly, I’ve come to think of EA as a more than anything else.