12 Comments
Nov 3, 2023 · Liked by Ozy Brennan

I feel like this overlaps a lot with the debate between normie nonthreatening emergency preparedness and doomsday prepper militia fanatic activities. (Full disclosure, I'm somewhat in favor of both of those.)

A few thoughts:

- It's almost impossible to predict how likely weird science fiction stuff is to happen. Error bars inevitably cover many orders of magnitude. This by itself kind of militates against what I've thought of as the EA virtues of doing things that can be measured.

- It's pretty clear that especially over centuries, normalcy bias is not a good idea.

- With things where the likelihood or impact is hard to predict, so there's lots of uncertainty and nothing to measure, it's very easy for self-serving bias or actual political corruption to lead people to wildly overestimate them.

- Single-point probability optimization is probably not appropriate for things where the distribution itself is unknown.

Nov 3, 2023 · Liked by Ozy Brennan

Revolutionary anarchism might be an example of a movement that fulfills both criteria while socially being as far from the longtermist scene as possible.

* They’re longtermist in the blunt literal sense - they're engaged in an arduous uphill struggle towards a revolution that will bring turmoil and violence in the short run, in the hope that it will eventually lead to an indefinitely long period of peace and justice.

* The theory they're working off is a form of social science fiction, since the society they aspire to build - a large, stable, highly-developed anarchist civilization - is radically different from any society we know of.


Agreed.

And for whatever it's worth, I hope you continue refusing to get into weird science fiction shit. Because most "longtermist" giving looks like wasted money to me.

Many of the conditions that lead to charities wasting their resources (like unclear goals, money earmarked for ineffective projects, impossible-to-measure effects, and badly aligned career incentives) become almost unavoidable when you give to weird science fiction shit.


I think this is missing something. The sci fi divide is real, but engineered pandemics are only *slightly* sci fi, and people in EA working on natural pandemics are afaict still mostly on the "longtermist" side socially. And if we're counting gene drives as sci fi, what about the ones targeted at malaria?

I think an equally (or more) important dividing line is X-risk vs. Other Stuff. A small or uncertain probability of near-term extinction is kind of bad from a short-term perspective, because you might die, but arguably less bad than malaria, which can also kill you and might be easier to fight. But from a philosophical longtermist angle, extinction is very very very bad, much worse than "billions of deaths" suggests.

These can kind of substitute for each other; if someone thinks AI is 90% likely to kill everyone, they don't particularly need to give extinction any special weight to be concerned. If they are hardcore philosophical longtermists, even a 1% risk is extremely concerning.

Somewhat relatedly:

> Many people also believe that, if all goes well, advanced artificial intelligence will cause people, the vast majority of whom are currently alive, to live forever in utopia

I don't think "the vast majority of whom are currently alive" reflects the expectations of most people who believe AI has utopian upsides. Did you mean "including the vast majority of those currently alive", or do you really mean human population wouldn't grow significantly in these scenarios? I think most of the people in the sci fi camp expect the benefits of utopia to mostly go to people who don't exist yet.


I think there's something here! At least in our current situation, it reeeally doesn't make that much sense to draw a boundary based on when the beneficiaries are born. A few thoughts your post sparked:

1. Maybe a slightly more natural distinction is: do people _currently_ suffer from the problems you're trying to fix, or are they problems that haven't affected anyone yet, but you think will be very big in the future? This captures a little more of the distinct flavor of "longtermist" causes.

2. This starts as nitpicking, but then maybe gets more useful: It doesn't seem correct to me to say that philosophical longtermism has won, any more than it seems like "philosophical animal-ism" has won. There's such a huge difference between thinking that we should be nicer to animals and try not to pollute so much, and thinking that industrial chicken / fish / pig / etc. farming is an ongoing horror at almost incomprehensible scale and intensity. On the other hand, I do think the most respectable position is that torturing animals is bad -- so _has_ philosophical animal-ism won, or not?

Maybe what we're really left with -- in animal welfare, longtermism, and global health/poverty -- is not best characterized as disagreement about who is a moral patient, but as a combination of (1) a difference in factual views about what's happening in the world, (2) a difference in judgment about what attitude is appropriate in light of these facts, and (2a) a difference in judgment of how we should prioritize different moral patients? This feels like a more fruitful way to draw sociological distinctions, at least within the bubble where Care/Harm is the main ballgame.

(Historically, and when talking to people far outside the bubble, it probably was/is useful to say "well it's because I care about foreigners / animals / future people," or "I think morality is mostly about effects on the patient instead of judgment of the agent," and they'd say "ohhhh, see I DON'T think those things, that explains our difference." That's just not the communication situation we're usually in these days I guess.)

3. (Just musing, doesn't seem that useful:) I'm not sure how "longtermism" actually came into use in EA. But it's easy to imagine that around 2007 when these kinds of terms were getting locked in, there were a lot more people who thought AI risk and (engineered) pandemic risk were super far-off -- COVID hadn't happened yet, and deep learning basically didn't work yet, so even mentioning AGI put you squarely in the "thinks aliens are real" reference class. So maybe that explains some of the history? Especially rhetorically, I remember it seeming important in ~2011 to explain to people that yes, I _do_ understand that AI isn't real yet and Microsoft Word isn't going to wake up and kill anyone, and I'm not saying that I have secret knowledge that AGI will happen soon -- I just don't care much about how many generations down the line it'll bite. I can imagine trying to frontload that explanatory work into a term like longtermism. (Doesn't make it right!)

Aaaanyway, thanks for the post, it was thought-provoking :)


And worse, people often conflate the two in order to dismiss one. "I don't care about far-future people, therefore I don't care about AI risk" is not actually a sound argument, but calling AI risk "longtermist" makes it seem so. I think people really need to stop doing that. Actual longtermism is stuff like preparing for the long reflection, not preventing a pandemic in 10 years.


Agreed that the salient dimension is not really time, but possibility. Normie-EA shit is stuff that is known to occur today, mid-"longtermist" shit is stuff that has some known probability of occurring over time, and far-"longtermist" shit includes stuff that not only might not ever happen, but might in fact turn out, in retrospect, to have been impossible all along.

This does, as Hoffnung notes, pose serious foundational problems for quantitative practices. When we're quantifying over contingent uncertainty, we kind of all agree on what's going on. (Pace, you know, everything.) When we're quantifying over uncertainties of necessity, what exactly are we doing? Maybe electrons suffer or maybe they don't, but presumably once the question is answered it will turn out to always have been true (perhaps even in ~all conceivable universes?), which is a weird risk to try to handicap.


I think this post hints at something true: that the actual philosophical beliefs someone has on whether there is a discount rate on well-being and whether it is good to create happy people aren't as important as the vague "vibes" that people have in deciding whether someone is a longtermist or not.

But I think for people who actually take realist philosophy (especially ethics) seriously, the thing that decides whether they are longtermists or neartermists really is some abstract philosophical argument.

When I think about donating to some animal welfare or global health charity, or to some longtermist cause, I actually think about whether helping 100 billion future people or people alive now is better. Some people probably like thinking about weird sci-fi stuff, but doing something against nuclear war, trying to make institutions better, or giving money to some boring science or policy think tank really isn't very sci-fi.

Especially if one thinks that X risk isn't very high. When I try to think about doing something about AI or pandemics, the calculus only works out if I care about bringing one million future people into existence 200 years from now as much as I care about saving one million poor people alive right now from death. And even if I don't feel it in my bones, I believe it is the right thing to care about both groups equally, so that is what I try to do and why I am a longtermist.
