21 Comments
Dec 27, 2023·edited Dec 27, 2023

I think the framing of this post attacks a strawmanned version of your opponents. As I understand the discourse, it is not about "spin out specifically global health and development" (which, as you point out, would be weird). As I understand it, it's "spin out specifically AI risk" (see e.g. Will MacAskill here https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing?commentId=Bi3cPKt27bF9GNMJf).

A chief reason for this is a tendency in recent EA for AI risk and related worries to take over a disproportionate share of attention, particularly among highly engaged EAs. Two examples which I think were bad:

1. Germany has one of the largest EA communities globally, partly because the Effektiver Altruismus organization was professionalized early on. A few years ago, the organization dissolved quite abruptly, without a real replacement, because the people involved wanted to work on S-risks. This left a substantial gap in the German EA infrastructure until fairly recently.

2. 80,000 Hours, one of the most promising EA organizations, has deprioritized global health & development as well as animal welfare, and now seems to focus almost exclusively on longtermist causes (the most prominent of which is, of course, AI risk). This strikes me as quite a loss, as it no longer seems sensible to refer people willing to change careers to the famous EA career-advice organization, unless you want them nudged strongly towards longtermist causes.

You may think that this switch is fine - maybe longtermism, and AI risk as a specific cause, really should dominate the EA movement, because they rank best on utility calculations. But I think there are significant, unaddressed criticisms of those calculations - see e.g. Thorstad's article here https://globalprioritiesinstitute.org/david-thorstad-three-mistakes-in-the-moral-mathematics-of-existential-risk/, which pushes back on the few examples of explicit calculations favouring existential risk reduction, arguing that they rest on deeply implausible assumptions. And that is if they are underpinned by calculations at all - Thorstad had to go to great lengths just to find any concrete numbers claiming that the cost-effectiveness of risk reduction exceeds that of short-termist interventions. I suspect there is a good chance that other, less epistemically kosher dynamics (https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing) drive highly engaged EAs to buy into global risk reduction at significantly above-average rates, rather than into more "traditional" and boring areas - for some of the reasons you outline here: it is certainly more exciting to do research on a fancy new technological risk than to engage with the drudgery of modern development economics.

I think your post misunderstands the reasons why people wonder whether a part of EA should be split into its own movement, and it did not make me update. I am unsure whether splitting EA into a future-risk branch and an "all other things" branch would be good. I think it is important that EA works on global catastrophic risk reduction, and I have encouraged and helped people to enter that space. But I do think there is a real tendency for that specific sector to eat up other sectors' attention, funding, talent and infrastructure; I believe that tendency is stronger than the positive arguments for risk reduction currently justify; and I believe it is actively harming the Effective Altruist movement.


IME most people who want a split want anti-factory-farming on the normal side with the bednets, not on the weird AI side. I'm one of the people you describe, but I seem to be in a minority.

Okay, so what I want isn't the Effective Altruism movement (except for the company, y'all nerds are great). But the places you suggest are even worse fits.

The movement I want is evangelistic, at least to the extent of being, y'know, a movement, not a social club. I'd like every single person in the world to give 1% of their income, and everyone in rich countries who's not particularly poor to give 10%. I'd like a world where that's seen as basic decency, like holding doors open for people.

It doesn't look like quiet work to direct government spending: Sure, that's part of it, directing foreign aid is great, but private donors save a bunch of lives. Wonkery is necessary — I want charity money to actually help, not be wasted — but most donors aren't wonks, we're just paying the wonks to tell us where to send the cash.

It doesn't look like the religion option: I want a big tent. It's perfectly fine if charitypilled people share no common interests, just like the goal of suffragism was votes for all women, not pleasant conversation between suffragists.

The Effective Altruist movement as it exists is a decent fit for this: GiveWell is moving about half a billion USD per year from private donors to excellent charities. Between EA (big social movement, lots of stuff I disapprove of and find annoying to be associated with) and randomistas (very good, but purely academic), I stick with EA for now.

But I'd love a big social movement whose heroes are, like, Melinda Gates and Saloni Dattani. It's not obvious to me whether this is a harder sell, or an easier sell, than EA as it currently exists.


It makes complete sense for effective altruism to reframe its focus on 1) What we know is effective through evidence and data; and 2) What is altruistic.

I respect longtermism but the evidence that donations there save lives is extremely flimsy and no one knows what’s actually effective. And there’s nothing altruistic about trying not to die.


The US is the country where individual citizens give the most to charity in the world. In my country (which is rich, western and progressive on many issues), "you should donate to charity, and it should be effective" might very well make for a subculture. And as most of us are irreligious - the US is orders of magnitude more religious than other rich western countries - doing so outside a religious context seems more, well, effective.

Genuinely just the entire idea of effective charity, and *actually doing something concrete with my life that would actually concretely help others*, instead of just “spreading awareness” or something, got me into EA more than AI or utilitarianism or consequentialism or anything else. I reckon I’m not the only one.


Coincidentally to this post, I'm reading Norbert Wiener's Cybernetics, and he describes what was basically a social group he was in, dedicated to discussing and analyzing scientific papers. It makes me wonder to what extent "people who do intellectual work but for FUN" is a motivating force in technology and world-changing thought.


I feel like you're misunderstanding the claim here. The claim is that effective altruist cause areas other than global poverty are actually very ineffective ways to spend money, and therefore "effective altruism" is just a name with no content at this point.

So for instance, your list of assumptions contains some that don't seem to have a lot to do with effectiveness. Why would it be possible to do math if you aren't reasonably certain of the numbers? Why isn't it reasonable to assume that the world will look basically similar to now in 100 years? Why isn't "same species" morally relevant?

If you replace some of those stranger assumptions for more normie ones and (therefore) focus only on charities that actual statistics say are effective for reducing human suffering right now, you get GiveWell and global-poverty-only-EA.


Maybe number 12 is "other x-risks" like asteroid impacts, very very bad solar flares strong enough to destroy all electronics, and "that cordyceps thing makes people zombies in several different books/movies/video games".
