There are two different things people refer to as “longtermism.”
One is the philosophical idea of longtermism, which is the belief that it’s very important to protect future generations. Philosophical longtermism is a very popular, mainstream belief. The UN issued a statement about it in 1997. The well-being of future generations is used as a justification for worrying about the national debt, Social Security, the environment, and climate change, alongside many other issues. You can even purchase longtermist soap:
Seventh Generation was born in the late 1980s in Burlington, VT. At the time, we were a niche mail-order catalog business that specialized in energy-, water-, and resource-saving products. Our name is inspired by an ancient Iroquois philosophy which instructs that “in our every deliberation, we must consider the impact of our decisions on the next seven generations.” This credo guides us in every product we make, and every action we take. It inspires our belief in a seventh generation to come.
By the time I am using a longtermist product to clean my countertops, I think we can safely say philosophical longtermism won.
But when most people talk about longtermism, they don’t really mean the philosophy of culturally appropriative soap companies. They’re referring to a sociological divide in the effective altruist community.
Longtermist cause areas are those that sound like science fiction. Most centrally, longtermism refers to existential risk, especially bioengineered pandemics and advanced artificial intelligence. However, some longtermists also worry about the suffering of digital minds, anti-aging research, nanotechnology, rebuilding industrial society after a catastrophe, and forecasting research.1 Conversely, neartermist cause areas are those that do not sound science-fictional, such as preventing malaria, making sure people have clean drinking water, treating clubfoot, convincing people to stop eating meat, passing laws against animal cruelty on farms, and building an animal advocacy movement in Asia.2 In fact, I have observed a field transitioning from longtermist to neartermist in real time: the wild-animal welfare movement shifted from longtermist gene drives meant to reduce wild-animal suffering to neartermist rodent birth control.
Whether something sounds science-fictional has nothing to do with whether the benefits come now or later. Many longtermist causes pay off more quickly than many neartermist causes. The vast majority of the benefits of deworming charities come decades from now: children who grew up without worm infections are slightly smarter and healthier and therefore earn more money throughout their lives. It will take many decades for the Asian animal advocacy movement to begin to improve conditions for a significant number of animals in Asia.
Conversely, many people believe that bioengineered pandemics and advanced artificial intelligence pose a significant risk of killing everyone in ten to fifteen years. Many people also believe that, if all goes well, advanced artificial intelligence will let people live forever in utopia, and the vast majority of those people are alive right now. On fairly reasonable assumptions, caring more about present people should cause you to devote more of your energy to sociologically longtermist causes.
I have suggested to a friend who does longtermist grantmaking that they should troll everyone by dropping two million dollars on malaria nets. People who aren’t sick with malaria are going to be more ambitious, more clever, and harder-working—so they can start businesses, reform their governments, found charities that meet local needs, and generally make their countries better. Future generations will have enough to eat and medical care for their children. Non-corrupt democratic governments are more peaceful and less likely to commit atrocities. With all children educated, we are more likely to find key scientific insights that give us more control over the world and ethical insights that tell us what to do with it. Malaria prevention might actually be one of the best ways for people in the developed world to help developing countries become rich: it’s extraordinarily difficult for outsiders to impose functioning markets or governments on a country, but you can create the conditions for people to do it on their own.
But, of course, there’s a reason this is “a troll move” and not “normal grantmaking behavior, a manifestation of the good judgment and expertise for which my friend was hired.” Longtermist grantmakers are not supposed to give to normie effective altruism charities, the ones that are advertised on center-left podcasts whose hosts are in favor of the war on cars, scientific study preregistration, and Trans Lives Mattering. They are supposed to give to weird science fiction shit. Everyone knows it. It’s just that, for historical reasons, we’re all acting like weird science fiction shit involves caring about future generations and normie shit involves caring about present people.
I am worried this is coming off as a bit snarky. There is nothing wrong with prioritizing weird science fiction shit. We live in a time of great technological progress; nuclear bombs and Moon landings once sounded as science-fictional as bioengineered plagues and nanotech do today. And the sociological use of the term “longtermism” is getting at a real difference. As you get more heavily involved in effective altruism, you are more likely to be interested in weird science fiction shit.3 We need a word for Weird Science Fiction Shit Effective Altruists to distinguish them from the ordinary GiveWell donor or your humble blogger.4
I just think it’s important to bear in mind that the sociological category is basically unrelated to the philosophical category. Convincing people of philosophical longtermism has little to do with convincing them that the world’s biggest problems sound like science fiction novel premises.
1. Which is not in and of itself science fiction but often involves making predictions about science-fictional things.
2. The big exception is the development of cultivated meat, which in spite of its science-fictional nature winds up being neartermist because everything else people are doing about farmed animals is firmly neartermist.
3. Unless you're an effective animal advocate, in which case you are more likely to care about increasingly unappealing groups of animals.
4. Who has been refusing to get into weird science fiction shit out of sheer stubbornness.
I feel like this overlaps a lot with the debate between normie nonthreatening emergency preparedness and doomsday prepper militia fanatic activities. (Full disclosure, I'm somewhat in favor of both of those.)
A few thoughts:
- It's almost impossible to predict how likely weird science fiction stuff is to happen. Error bars inevitably cover many orders of magnitude. This by itself kind of militates against what I've thought of as the EA virtues of doing things that can be measured.
- It's pretty clear that especially over centuries, normalcy bias is not a good idea.
- With things where the likelihood or impact is hard to predict, so there's lots of uncertainty and nothing to measure, it's very easy for self-serving bias or actual political corruption to lead people to wildly overestimate them.
- Single-point probability optimization is probably not appropriate for things where the distribution itself is unknown.
Revolutionary anarchism might be an example of a movement that fulfills both criteria while socially being as far from the longtermist scene as possible.
* They're longtermist in the blunt literal sense - they're engaged in an arduous uphill struggle towards a revolution that will bring turmoil and violence in the short run, in the hope that it will eventually lead to an indefinitely long period of peace and justice.
* The theory they're working off is a form of social science fiction, since the society they aspire to build - a large, stable, highly developed anarchist civilization - is radically different from any society we know of.