Disclaimer: For ease of writing, I say “effective altruism says this” or “effective altruists believe that.” In reality, effective altruism is a diverse movement, and many effective altruists believe different things.
This is genuinely the best, and most persuasive explanation of the accelerationist/doomer argument I've read. Thank you, *thank you* for not saying, "Everyone knows this stuff, you're dumb for disagreeing, and if you want more details please read these eight ten-thousand word, rambling manifestos from a crazy person." In my experience, that is the default response, and it's so dismissible and foolish.
...You know what? I'm not even going to argue. My initial reaction is to point out how seldom trends continue indefinitely in any field, and try to poke holes in the whole idea. But I recently made an observation that comments on an article tend to provoke immediate responses over good responses. The correct thing to do would be to think about this and see if my doubts hold water before reflexively dismissing it.
It’s not clear to me how “trends seldom continue indefinitely in any field” contradicts the article? My read is that it already says trends seldom continue, and therefore we need to consider off-trend possibilities when planning for the future.
Let's try again, cuz my first response was way off topic.
Essentially I'm rejecting the Accelerando option as a possibility - the very fact that we're on a hockey stick graph is strong initial evidence to me that all the growth is about to stop.
And I'm rejecting the doomer version of this - that when the growth stops it'll be catastrophic.
My heuristic is to assume we'll muddle through - we'll do bad planning but then good crisis management and ultimately be fine, though not as fine as we could have been. My heuristic is that people have always said "The sky is falling" or "Utopia is nigh," followed by, "Yeah, I know people have always said this but this time it's really really real because of <post hoc rationale>." And my heuristic is to reject that line of thought if it's based largely on speculation and not on empirical observation - which much of EA and rationalist thought is not.
Ozy makes a good argument that this is a bad heuristic, though.
Ok, this makes sense. What do you think of the various climate change scenarios? The main thing obviously fits your framework really well, but I’m curious about things like ocean algae blooms and other not-quite-existential threats that *could* result there. Seems to me to still be low probability, but some have more solid evidentiary bases than AI X-risk.
Pretty sure those would still fall into the non-existential category, but they could be very close, if they happen.
Here I have to admit some real weaknesses in my worldview that do bother me:
There *have* to be scenarios that will resist crisis management if they aren't planned for in advance, some catastrophically so. The space of potential problems just must include that. So while the odds of any given scenario being that case seem very low to me, the odds that eventually one will be are basically 100% on a long enough timescale.
Some experts seem to think that's climate change. I am skeptical (not of climate change, but that climate change will be an extinction-level event), but open to being proven wrong. I suspect we'll have enough time to adapt to a horrible new normal, even in the worst cases. I'm open to the idea that this will result in resource wars and civil unrest (in fact, it looks to me like it already has), but each day will look mostly like the one before it for most people.
With that said, I know very little about the algal bloom scenario, except that it played a big part in some prior extinction events. So again, talking from a place of profound ignorance here.
I'm not sure how to reconcile that. I can't help but believe acting on any stated threat needs to be calibrated to the data we have on it, or some probability spread of potential outcomes. And I also can't help but believe acting in that way will ultimately wipe us all out. So, uh... yeah I don't know.
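The "basically 100% on a long enough timescale" point a few comments up is just compound probability. A toy sketch (the per-century risk p here is an illustrative assumption, not an empirical estimate):

```python
# Toy compound-risk sketch: if each century independently carries a small
# probability p of a catastrophe that resists crisis management, the chance
# that at least one occurs within n centuries is 1 - (1 - p)**n.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    print(n, round(p_at_least_one(0.01, n), 3))  # creeps toward 1
```

With p = 1% per century, the cumulative odds are about 9.6% over ten centuries, 63.4% over a hundred, and effectively 100% over a thousand, even though any given century looks safe.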
Yes, there is a clear disagreement between you and the doomers if your p(doom) is much lower than theirs. That’s a separate line of reasoning, which I don’t have much to contribute to. As for this article:
There’s a place where you’re talking past each other, and a place where you may be underestimating the strangeness of what could be coming down the line.
Talking past each other: it’s important to consider timescales here. The scenarios being considered kind of hand-wave away hundreds of years of growth in the Star Wars scenario before moving on to the implications. I think a lot of critics focus on those hundreds of years as “relatively normal” and don’t realize that what’s being discussed is the period afterward. I don’t think that’s you, but I want to clarify because I was getting a whiff of it.
Underestimating the strangeness: a world with Star Wars-like technology would look much, much more different from our world than Star Wars does. Sci fi in general has to be relatable and understandable to us, whereas the future does not. The ‘nothing crazy’ future growth scenario could still be very crazy. Even just perfecting existing technologies, like cybernetic limbs, or heroin, could have massive implications.
More importantly for this discussion, that world could eventually hit Malthusian limits. Or it would eventually run out of economic growth. Those both would likely result in worlds a normal person wouldn’t consider “normal.”
Re-reading your response, I think you realize all of this. What I don’t think is that what you’re envisioning stays within the bounds of most people’s “normal.”
so what I'm hearing is, the gay in FALGSC is a key functional element,
A big part of the issue, I think, is the question of what exactly is the shape and equations of state of that steeply upward sloping curve, which is genuinely hard to measure.
I think there's a huge tendency for people to apply a romantic and even quasi-messianic view to exponential increase. I don't know how *incomprehensible* our society would be to a premodern peasant, as strange as it would be, and the comparisons to nonhuman animals seem like category errors.
I am, of course, like Malthus, a Catholic, and so it is not particularly remarkable to me that he views non-reproductive sex acts with disgust and horror, but I have always found it rather remarkable that he didn't seem to have thought much of other means of regulating population growth -- the built-in method in Catholic societies would seem to be adjusting what fraction of people become monks or nuns instead of marrying.
I think it's remarkably ironic that everybody talks about Malthus being blindsided by *productivity getting decoupled from land* when we are suddenly facing a fertility crisis that will probably leave future generations dominated by very atypical communities that have fertility above replacement.
My big criticism of economic measures is that money, in these measurements, is *especially* fake. Part of my argument that we're getting Star Wars (if we can solve the fertility problem) or A Canticle for Leibowitz (if we can't) is that actually materially measurable stuff like availability of energy is arguably not going up the way money is. Some of this talk about the exponential curve seems stuck in the dreams of the 1950s when everybody was going to have a nuclear reactor in their basement or we were going to start colonizing space already.
I *expect* either Star Wars or A Canticle for Leibowitz, I *want* Star Wars, and regard Accelerando with horror. I think that AI is going more in the direction of giving us Star Wars style droids than the apotheosis-concept (although that might be a good way to get sucker punched by a religious apocalypse that I would expect but most people wouldn't.)
> But global health and development? Look at the last word! “Development” is economic growth.
It may be the case that global health interventions have positive effects on global development (though when Bob tried to gauge the scientific consensus, he found that most institutional development economists are aid skeptics: https://bobjacobs.substack.com/p/is-effective-altruism-neocolonial), but "global health and development" EAs focus wholly on global health interventions, with development as a possibly fortunate side effect, so it's a stretch to say they're implicitly saying anything about future economic growth.
>And Our World In Data’s chart is understanding the case, because it starts in the year
I think you mean understates
"And Our World In Data’s chart is understanding the case" -- I think you mean "understating"
But, like, **was** Malthus really ever right?
>Malthus’s basic insight is: the population can’t grow exponentially forever, because the land only produces so much food. Therefore, in the long run, births have to be equal to deaths.
I feel like history is short on known cases of people living right at their land's carrying capacity. There have definitely been places and times where people are living right at the edge of their civilization's ability to produce food, but the limiting factor in those cases is more about the average food production per person being barely subsistence-level. If you doubled the size of a hunter-gatherer tribe, wouldn't they just need to spread out a bit?
There are plenty of other disasters (including blights on crops) that get more and more likely with higher population and/or higher population density. But framing those as necessary and inevitable relies on this intuitive-but-unproven claim that a civilization that gets too big runs out of farmland, and in practice I think that's very rare. More people means more innovators, more explorers, and more robustness, no matter the tech level.
The real Malthusian catastrophes have been when the British invade and take your food while saying "so sad that this inevitable Malthusian catastrophe is happening to you."
There's a lot of confusion here involving the phrase "subsistence level". Different people use it in different ways. Ecologists, anthropologists, and sometimes historians take it to mean "a level where, if anything even slightly bad happens, a bunch of people will starve". But Malthus and most economic historians use it to mean "births roughly equal deaths", which you can get at a level well above "brink of starvation".
That's interesting! But I'm not sure I'm tracking--certainly there've been populations in rough equilibrium at various points, for various reasons, but Malthus wasn't being vacuous. For his theory to be correct, any population not in equilibrium needs to eventually hit a hard limit on resources, right? So to support it that's what we'd need.
A population not in equilibrium simply moves towards equilibrium over time. If income is low (high), then deaths exceed (fall short of) births, the population shrinks (grows), and so income rises (falls) until long-run equilibrium is restored. But this level of income is not necessarily anywhere near "close to starvation".
Malthus did use "catastrophic" language in a few places, and some, especially outside of economics, have latched onto that, but for the world to be Malthusian you don't need famine or war; you just need the death rate to depend on income. The adjustment to long-run equilibrium can be "smooth".
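The smooth adjustment described here can be sketched as a toy simulation. The functional forms and constants below are illustrative assumptions, not Malthus's own model: births rise with income, deaths fall with it, and the population drifts toward the level where the two rates are equal.

```python
# Toy Malthusian adjustment: births rise with income, deaths fall with it,
# and population drifts smoothly toward the level where the two are equal.
# Functional forms and constants are illustrative, not Malthus's own.
def simulate(pop: float = 1.0, output: float = 100.0, years: int = 500):
    for _ in range(years):
        income = output / pop
        birth_rate = 0.002 * income   # higher income -> more births
        death_rate = 0.2 / income     # higher income -> fewer deaths
        pop *= 1 + birth_rate - death_rate
    return pop, output / pop

pop, income = simulate()
# Converges to pop = 10, income = 10: births equal deaths at an income
# pinned down by the behavioral parameters, not at the brink of starvation.
```

Notice that doubling `output` in this sketch doubles the long-run population while leaving equilibrium income at 10: income, not population, is what the model pins down, and nothing in the adjustment path requires famine.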
Maybe? In my head, the Assyrians hit a sustained drought more than a population cap (unless you're talking about earlier than that?). I hadn't heard about Roman Italy having agriculture problems.
I agree that a lot of the Green Revolution isn't long-term sustainable. That's the thing about relying on human ingenuity to solve problems as they arise--it's always possible that we'll run out of good ideas at some point.
I very honestly am not sure that Effective Altruist is a good label. Effective Altruists seem very resistant to the notion that what you call yourself matters a lot.
nice! I like these more digestible summaries of common EA worldviews. I think EA discourse often moves quickly into esoteric edge cases without explaining the basics enough—and the basics are what I find most compelling. unsurprisingly I don’t agree with everything here (e.g., GiveWell donations reflecting some far-future interest) but great to see someone distill it :))
"And there’s not a magic line drawn somewhere between serious, respectable economic growth and science-fiction economic growth. It’s the same thing! By donating to GiveWell top charities, you are implicitly saying “I think Accelerando/Star Wars is going to be fantastic, we should get there faster.”"
I find this pretty hard to believe, and I don't even think I want Star Wars rather than a secret third thing.
As a sanity-check, one type of science-fiction technology would be better biotech for trans people and furries. I'm pretty sure you only get that if you directly work on that specific kind of biotech, rather than e.g. deworming being an effective way of getting it.
Larger population with sufficient means to pay for the technology means more demand. Larger population with education to develop the technology means more supply.
This assumes it's demand-constrained or supply-constrained rather than ideology-constrained.
The only thing I disagree with here is identifying it with "effective altruism," as if it's somehow particular to whatever that is. Your section II, for example, isn't EA; it's more or less just basic economic history. I'm not an EA... I think... but I agree with everything in there.
So is everyone who's into charity and economic growth implicitly interested in Star Wars or Accelerando? Are the people at Charity Navigator? Are all mainstream economists?
[mod voice] Chill, please, I try to have a light hand with moderation but there's no reason to go around insulting other commenters.
Were native Americans impoverished?
As per usual, it depends. The Iroquois? Maybe not, I’m not sure. Mesoamerican peasants? Yes, obviously. And there were far more of the latter. Intensive subsistence farming to feed the appetites of nobles is everywhere and always hellish.
EA applies business constructs (goals and metrics) to complex systems that are vastly different from businesses.
Business constructs? You think the New Deal didn’t have goals and metrics? The Great Society? Literally any successful state project ever carried out? Of course they did. They had metrics, and frankly rather crude ones - simple variations on “line go up”. But it was a different line, and that makes all the difference. They were crude and dumb and poorly aimed, and nonetheless they worked.
Is EA a neoliberal wet dream? Yes, obviously. EA politics, like all American politics, mostly fucking sucks. But there are children dying of malaria, and they are buying bednets.
> devolution of community power over its surplus
Well, yes - because it’s not your surplus. Not our surplus, assuming your sense of “community” is something like “the American people”. (If it’s even more parochial than that, well, that makes the issue even worse. Jefferson’s dream is dead, and good riddance to it. It’s never coming back). It is the surplus, and while it “naturally” (i.e. by default, given our present institutions) concentrates disproportionately in first-world hands, there is nothing particularly just about this. I’m not going to go full maoist-third-worldist and claim we’re not exploited in any way whatsoever, that’s plainly not the case, but we are far, far better off than the hundreds of thousands of children dying of malaria every single year.
If you want to say that’s not your concern, fine, be that way, the new right is very welcoming to all those hostes humani generis. Which is to say, all localist communitarians: when there are children dying of malaria, they’re functionally the same. They’ve adopted half of hippiedom already, I’m sure they can be convinced to take the rest. But don’t pretend it’s some principled New Left wisdom we’re too autistic to understand. It’s New Left, I’ll grant you that - which is to say utopian. There are children dying of malaria: spend your money on them, or spend it somewhere else. That’s all there is to it.
[mod voice] Chill, please, there's no reason to go around referring to people as members of the New Right unless there is some reason to believe they are.