I think the framing of this post attacks a strawmanned version of your opponents. As I understand the discourse, it is not about "spin out specifically global health and development" (which, as you point out, would be weird) but about "spin out specifically AI risk" (see e.g. Will MacAskill here https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing?commentId=Bi3cPKt27bF9GNMJf).
A chief reason for this is the tendency in recent EA for AI risk and related worries to take up a disproportionate amount of attention, particularly among highly engaged EAs. Two examples I think were bad:
1. Germany has one of the largest EA communities globally, partly because the Effektiver Altruismus organization was professionalized early on. A few years ago, the organization quite abruptly dissolved without any real replacement because the people involved wanted to work on S-risks. This left a pretty substantive gap in German EA infrastructure until somewhat recently.
2. 80,000 Hours, one of the most promising EA organizations, has deprioritized global health & development as well as animal welfare, and now seems to focus almost exclusively on longtermist causes (most prominent of which is, of course, AI risk). This strikes me as quite the loss: it no longer seems sensible to refer people looking to change careers to the famous "EA switching careers" advisory organization, unless you want them to be nudged strongly towards longtermist causes.
You may think this switch is fine - maybe longtermism, and AI risk as a specific cause, really should dominate the EA movement, because they rank best on utility calculations. But I think there are significant, unaddressed criticisms of those calculations - see e.g. Thorstad's article here https://globalprioritiesinstitute.org/david-thorstad-three-mistakes-in-the-moral-mathematics-of-existential-risk/, which pushes back on the few examples of explicit calculations favouring existential risk reduction, which he believes rest on deeply implausible assumptions. That is, if they are underpinned by calculations at all - Thorstad had to go to great lengths to find any concrete numbers at all purporting to show that risk reduction is more cost-effective than short-termist interventions. I suspect there is a good chance that other, less epistemically kosher dynamics (https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing) drive highly engaged EAs to buy into global risk reduction at significantly above-average rates compared with more "traditional" and boring areas - for some of the reasons you outline here: it is certainly more exciting to do research on a fancy new technological risk than to engage with the drudgery of modern development economics.
I think your post misunderstands the reasons why people wonder if a part of EA should be split into its own movement, and it did not make me update on the question. I am unsure whether splitting EA into a future-risk branch and an "all other things" branch would be good. I think it is important that EA works on global catastrophic risk reduction, and I have encouraged and helped people enter that space. But I do think there is a real tendency for that specific sector to eat up other sectors in attention, funding, talent and infrastructure; I believe that tendency is stronger than the positive arguments for risk reduction currently justify; and I believe that trend actively harms the Effective Altruist movement right now.
Yeah, at the risk of being vaguely facetious, a way I could imagine this split is something like "Empirical Altruism" vs "Rational Altruism," in the philosophical "empiricism as sense data vs rationalism as thinking" sense, rather than the "rational means we think about things" sense.
I came into this movement and was excited about it because the premise was "hey, look at PlayPumps, it turns out just coming up with clever ideas about what helps the world doesn't work, we need to ground that in data". That's the energy I miss in current EA, which is now so focused on things that are hypothetical and thus very hard to ground in any kind of empirical way.
Yeah, where's the RCT showing that AI risk mitigation charities actually do reduce AI risk? ;)
I agree that the equivocation between "AI risk organization" and "infrastructure for the entire effective altruism movement" is annoying and pretty unfair to people in other cause areas. (Effective animal advocacy seems to be responding by building out its own infrastructure, which is an interesting approach.) But mostly I was bitching about Scott Alexander's comment section. :P
+1 to this. I think the axis this turns on is high vs low certainty, with the worry specifically being that low-certainty cause areas (especially ones EAs are socially enmeshed in) are ripe for self-dealing even with careful policies and the best of intentions.
High-certainty cause areas are global poverty and (yay!) farmed animal welfare*, where it’s clear both that there is a problem and there are things we can do about it. It’s also easier to tell if you’re working on the problem, versus lining your friends’ pockets.
Low-certainty, at-risk cause areas are AI x-risk and longtermist EA movement building, where paths are murky and what’s good for the cause is probably also good for the EAs inside.
Most other cause areas are mid-to-high-certainty.
I personally think AI x-risk is very worth worrying about, and I’m glad people are working on it. But from a PR perspective, I’m constantly nervous about news stories where EAs lavished money on themselves in the name of movement building**, or where some alignment researcher turned out to be a grifter. The risk here is NOT to global development, which everyone already agrees is good. It’s to other high-certainty—but socially unusual—areas, like animal welfare. Those are the areas at risk of being thrown out with the bathwater.
* Most people agree that animals can feel pain, even if they don’t like thinking about it. Furthermore, both the empirical evidence and the a priori case for animals feeling pain are very strong.
** My experience has been that EA orgs are above-and-beyond scrupulous about this. Even the castle, which I remain skeptical of, was at least not as crazy as it sounds. Nonetheless, it’s still a worry.
IME most people who want a split want anti-factory-farming on the normal side with the bednets, not on the weird AI side. I'm one of the people you describe, but I seem to be in a minority.
Okay, so what I want isn't the Effective Altruism movement (except for the company, y'all nerds are great). But the places you suggest are even worse fits.
The movement I want is evangelistic, at least to the extent of being, y'know, a movement, not a social club. I'd like every single person in the world to give 1% of their income, and everyone in rich countries who's not particularly poor to give 10%. I'd like a world where that's seen as basic decency, like holding doors open for people.
It doesn't look like quiet work to direct government spending: Sure, that's part of it, directing foreign aid is great, but private donors save a bunch of lives. Wonkery is necessary — I want charity money to actually help, not be wasted — but most donors aren't wonks, we're just paying the wonks to tell us where to send the cash.
It doesn't look like the religion option: I want a big tent. It's perfectly fine if charitypilled people share no common interests, just like the goal of suffragism was votes for all women, not pleasant conversation between suffragists.
The Effective Altruist movement as it exists is a decent fit for this: GiveWell is moving about half a billion USD per year from private donors to excellent charities. Between EA (big social movement, lots of stuff I disapprove of and find annoying to be associated with) and randomistas (very good, but purely academic), I stick with EA for now.
But I'd love a big social movement whose heroes are, like, Melinda Gates and Saloni Dattani. It's not obvious to me whether this is a harder sell, or an easier sell, than EA as it currently exists.
As someone who’s not from the US or an English-speaking country, and was just sent a link to GiveWell online, I always thought this *was* their goal. Only later did I discover it was actually an entire subculture, and that they all apparently lived in a place in the US called “the Bay Area”.
To be clear: I really like some of the philosophical discussion taking place in EA. There are plenty of cool individuals that I would absolutely love to have a beer with. I fully understand if they really like their little subculture and community.
But the focus on the subculture (and the US-Anglo-centrism) seems very ineffective, if what you actually want is for people everywhere to donate. I can really imagine people seeing EA, thinking “this is some weird elitist rich people shit” (because yes, to almost everyone almost everywhere, programmers in the US are extremely rich) and dipping out. Sometimes I just want to shake them and beg them to please hire a normie marketing expert so they can appeal to normal people. In their defense, I often think this about leftists and other groups I generally like too, so they’re definitely not the only ones doing this.
On a last note, both “effective altruism” and “rationalism” (especially the latter) are like knives to my ears. Both sound so self-righteous and snobbish. I’d like a new name.
I'm not sure about the tradeoff here — if we only consider money raised in the short term, recruiting one billionaire is worth putting off millions of world-average-income people. Beyond that, I'm not sure: elitism has obvious downsides, but bigger reach is likely to distort the goal into generic pro-charity messages, no more effective than the average tin-shaker.
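(To put rough numbers on that tradeoff, here's a back-of-envelope sketch. Every figure in it is an invented round number, not data from anywhere; depending on what you plug in, the break-even point between one big donor and mass outreach lands anywhere from hundreds of thousands to millions of ordinary pledgers.)

```python
# Back-of-envelope: one committed billionaire vs. many small pledgers.
# Every figure here is an invented round number for illustration only.
billionaire_lifetime_giving = 1e9   # suppose they eventually give $1B
world_average_income = 12_000       # rough world-average income, USD/year
pledge_fraction = 0.01              # the 1% pledge
donor_years = 40                    # suppose each pledger gives for 40 years

lifetime_giving_per_donor = world_average_income * pledge_fraction * donor_years
donors_needed = billionaire_lifetime_giving / lifetime_giving_per_donor
print(f"{donors_needed:,.0f} average-income 1% pledgers = one $1B donor")
# ~208,000 with these numbers; plausibly millions with less generous assumptions
```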
Also, the wonkery itself is some weird elitist rich people shit. I've been into Effective Altruism since before it was called that, and I can tell you, it was a hard sell at the time too. People would reject the idea of running experiments, measuring outcomes, doing cost-benefit analyses as cold-hearted and antithetical to charity. So I'm not sure what your marketing expert can do.
Adding AIs and insect suffering makes it weirder, so possibly a harder sell, but has also given a lot of publicity to the boring bednet stuff — and made the idea of a randomised trial sound outright sedate in comparison. Hard to tell if it's helped or hurt!
(This sounds like it contradicts Ozy's "part of the DNA of the movement", but it doesn't really — there were always overlapping groups doing those different things, and the "Effective Altruism" name officially made those efforts "a movement".)
I find the Anglocentrism tedious, but I don't think it's hurting effectiveness, it's just annoying. Also it's less bad in EA than in most majority-American Internet bubbles.
It makes complete sense for effective altruism to reframe its focus on 1) What we know is effective through evidence and data; and 2) What is altruistic.
I respect longtermism but the evidence that donations there save lives is extremely flimsy and no one knows what’s actually effective. And there’s nothing altruistic about trying not to die.
"And there’s nothing altruistic about trying not to die."
This is only true if you have very high P(doom) and think you can personally help save the world.
If you have a P(doom) of 10% and think that by working on AI you can shave, say, a millionth off that risk, that will basically not help you personally at all. But in expectation you would save about 800 people (and also some larger number of possible future people).
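(For concreteness, here's that arithmetic as a quick sketch; the 10%, the one-in-a-million reduction, and the 8 billion population are all illustrative assumptions.)

```python
# Expected lives saved from a tiny reduction in extinction risk.
# All three inputs are illustrative assumptions, not claims about real numbers.
p_doom = 0.10                # assumed 10% chance of doom
relative_reduction = 1e-6    # your work shaves a millionth off that risk
world_population = 8e9       # roughly the current world population

expected_lives_saved = p_doom * relative_reduction * world_population
print(round(expected_lives_saved))  # 800
```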
So you can absolutely think anti x risk work is altruistic and not selfish.
You can in theory, but clearly no one’s buying it. There are way more convincing and common-sense reasons for someone to work on anti-x-risk than caring about future people.
The US is the country whose individual citizens give the most to charity in the world. In my country (which is rich, Western and progressive on many issues), “you should donate to charity, and it should be effective” might very well be enough to form a subculture of its own. And since most of us are irreligious - the US is far more religious than other rich Western countries - promoting it outside a religious context seems more, well, effective.
Genuinely just the entire idea of effective charity, and *actually doing something concrete with my life that would actually concretely help others*, instead of just “spreading awareness” or something, got me into EA more than AI or utilitarianism or consequentialism or anything else. I reckon I’m not the only one.
Coincidentally to this post, I'm reading Norbert Wiener's Cybernetics, and he talks about what was basically a social group he was in, dedicated to talking about and analyzing scientific papers. It makes me wonder to what extent "people who do intellectual work but for FUN" is a motivating force in technology and world-changing thought.
I feel like you're misunderstanding the claim here. The claim is that effective altruist cause areas other than global poverty are actually very ineffective ways to spend money, and therefore "effective altruism" is just a name with no content at this point.
So for instance, your list of assumptions contains some that don't seem to have a lot to do with effectiveness. Why would it be possible to do math if you aren't reasonably certain of the numbers? Why isn't it reasonable to assume that the world will look basically similar to now in 100 years? Why isn't "same species" morally relevant?
If you replace some of those stranger assumptions with more normie ones and (therefore) focus only on charities that actual statistics say are effective at reducing human suffering right now, you get GiveWell and global-poverty-only EA.
Most movements have more content than can be summed up in two words. Similarly, "feminism is the radical notion that women are people", but if you're like "women are people, just people who are naturally suited for childrearing instead of having careers" you won't be welcome at the Women's March.
If you don't agree with the general assumptions that tend to be made by effective altruists-- such as anti-speciesism and quantitative reasoning-- then you're not an effective altruist. You can still try to improve the world as best you can using your own assumptions, but it's kind of wild to expect this particular movement to cater to you because you made assumptions based on the name.
I don't think you can argue against splitting the movement, and then tell the would-be splitters that we're not effective altruists. I can see the case for either, but I can't follow contradictory advice!
But in what sense is the movement "effective altruism" if it's not about altruism that is effective?
If you disagree with doing math under conditions of great uncertainty, it’s not just EA you need to take that up with, it’s all of finance, CS, statistics, etc.
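(A minimal sketch of what "math under great uncertainty" looks like in practice, of the kind that's standard fare in finance and statistics: even if a charity's cost-effectiveness is only known as a wide distribution, you can still extract a usable expected value. The lognormal guess and its parameters below are entirely made up for illustration.)

```python
import random

# Monte Carlo expected value under great uncertainty.
# The lognormal guess below (median ~1 life per $5,000, spread over
# roughly two orders of magnitude) is entirely made up for illustration.
random.seed(0)

def sample_lives_saved_per_dollar() -> float:
    return random.lognormvariate(mu=-8.5, sigma=1.5)

budget = 1_000_000  # hypothetical grant, USD
samples = [sample_lives_saved_per_dollar() * budget for _ in range(100_000)]
print(f"Expected lives saved: {sum(samples) / len(samples):,.1f}")
```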
If you expect the world to still be basically similar in 100 years, you're expecting it to behave enormously differently from how it has behaved over the last several hundred years, for no apparent reason.
I would consider someone who believed those two things to be quite strange.
(On the other hand I do think same species is morally relevant.)
If a person 100 years ago were to use EA-style rationalism to conclude what the best use of their time was, purely by thinking hard rather than by looking at RCT-testable things like insecticidal nets (or even "use your eyeballs" things like "don't elect fascists or communists"), there's a very strong chance they would have ended up doing Fabian socialism and eugenics. Fabianism has a somewhat mixed legacy, while eugenics was just a straight loss. Saying "it's uncertain what the future holds, ergo my thing (based on nothing) is correct" is very silly.
Maybe number 12 is "other x-risks", like asteroid impacts, solar flares strong enough to destroy all electronics, and "that cordyceps thing that makes people zombies in several different books/movies/video games".