Discussion about this post

malte

I think the framing of this post argues against a strawmanned version of your opponents. As I understand the discourse, the proposal is not to "spin out specifically global health and development" (which, as you point out, would be weird); it is to "spin out specifically AI risk" (see e.g. Will MacAskill here https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing?commentId=Bi3cPKt27bF9GNMJf).

A chief reason for this is that, in recent years, AI risk and related worries have tended to absorb a disproportionate amount of EA attention, particularly among highly engaged EAs. Two examples which I think were bad:

1. Germany has one of the largest EA communities globally, partly because the Effektiver Altruismus organization was professionalized early on. A few years ago, the organization dissolved quite abruptly, without a real replacement, because the people involved wanted to work on S-risks. This left a substantial gap in German EA infrastructure until fairly recently.

2. 80,000 Hours, one of the most promising EA organizations, has deprioritized global health and development, as well as animal welfare, and now seems to focus almost exclusively on longtermist causes (most prominent of which is, of course, AI risk). This strikes me as quite a loss: it no longer seems sensible to refer people who are willing to change careers to the famous EA career-advisory organization, unless you want them nudged strongly towards longtermist causes.

You may think that this switch is fine - maybe longtermism, and AI risk as a specific cause, really should dominate the EA movement, because they rank best on utility calculations. But I think there are significant, unaddressed criticisms of those calculations - see e.g. Thorstad's article here https://globalprioritiesinstitute.org/david-thorstad-three-mistakes-in-the-moral-mathematics-of-existential-risk/, which pushes back on the few examples of explicit calculations favouring existential risk reduction, arguing that they rest on deeply implausible assumptions. And that is if the shift is underpinned by calculations at all - Thorstad had to go to great lengths to even find concrete numbers claiming that the cost-effectiveness of risk reduction exceeds that of short-termist interventions. I suspect there is a good chance that other, less epistemically kosher dynamics (https://forum.effectivealtruism.org/posts/euzDpFvbLqPdwCnXF/university-ea-groups-need-fixing) drive highly engaged EAs to buy hard into global risk reduction at significantly above-average rates, compared with more "traditional" and boring areas - some of which you outline here: it is certainly more exciting to do research on a fancy new technological risk than to engage with the drudgery of modern development economics.

I think your post misunderstands the reasons why people wonder whether a part of EA should be split into its own movement, and it did not make me update. I am unsure whether splitting EA into a future-risk branch and an "all other things" branch would be good. I think it is important that EA works on global catastrophic risk reduction, and I have encouraged and helped people to enter that space. But I do think there is a real tendency for that specific sector to eat up other sectors' attention, funding, talent and infrastructure; I believe that tendency is stronger than the positive arguments for risk reduction currently justify, and I believe it is actively harming the Effective Altruist movement.

Eschatron9000

IME most people who want a split want anti-factory-farming on the normal side with the bednets, not on the weird AI side. I'm one of the people you describe, but I seem to be in a minority.

Okay, so what I want isn't the Effective Altruism movement (except for the company, y'all nerds are great). But the places you suggest are even worse fits.

The movement I want is evangelistic, at least to the extent of being, y'know, a movement, not a social club. I'd like every single person in the world to give 1% of their income, and everyone in rich countries who's not particularly poor to give 10%. I'd like a world where that's seen as basic decency, like holding doors open for people.

It doesn't look like the quiet work of directing government spending: sure, that's part of it, and directing foreign aid is great, but private donors save a bunch of lives. Wonkery is necessary — I want charity money to actually help, not be wasted — but most donors aren't wonks; we're just paying the wonks to tell us where to send the cash.

It doesn't look like the religion option: I want a big tent. It's perfectly fine if charitypilled people share no common interests, just like the goal of suffragism was votes for all women, not pleasant conversation between suffragists.

The Effective Altruist movement as it exists is a decent fit for this: GiveWell is moving about half a billion USD per year from private donors to excellent charities. Between EA (big social movement, lots of stuff I disapprove of and find annoying to be associated with) and randomistas (very good, but purely academic), I stick with EA for now.

But I'd love a big social movement whose heroes are, like, Melinda Gates and Saloni Dattani. It's not obvious to me whether this is a harder sell, or an easier sell, than EA as it currently exists.
