36 Comments

> But consider the Society for Creative Anachronism (SCA). The SCA is a large community of people who do things in archaic and difficult ways, even though it is often straightforwardly possible to obtain the same result more cheaply and easily with modern technology and global supply chains. If we have superintelligence-guided nanobots, SCAdians would continue to sew their own clothes, calligraph their own awards, and forge their own swords.

I think we can go beyond "continue" and say that given transhumanism, SCAdians might well grow their own flax for garb, hit each other harder with sharper swords, learn more languages, cook with smaller eggs that exhibit more seasonal variance, and so on.


If you want another good piece of fiction about this exact thing, try The Metamorphosis of Prime Intellect by Roger Williams. He was writing about this stuff before it was cool. I read it at probably not an age-appropriate point but it really shaped my thoughts on Utopia... Which are very similar to yours.


On the personal-identity issue: as far as I can tell, there's no way for me to stay exactly myself over time. (I'm not currently very clear on whether it's even coherent to preserve a time-slice of myself indefinitely; it seems plausible that minds are inherently objects-in-motion, defined by their transformations rather than by their states.) I have no way to continue to exist even a second from now, never mind in the Hypothetical Posthuman Future. So what I do, instead, is to try to become whatever future-entity will best accomplish my present-goals, iteratively, forever. These goals include being happy, bringing about outcomes where other people are happy, et cetera. I have many past selves who bear very little psychological resemblance to me but whose goals I'm nonetheless fulfilling better than they themselves could; I expect a hypothetical posthuman entity derived from me to feel similarly about my present self. This seems... normal and fine? Arguably a sort of death, sure, but death-with-adequate-replacement of the sort that happens to almost everyone all the time, rather than death-without-adequate-replacement of the sort that leads to the world being tangibly worsened.

(Indeed, I take active steps already, in my current life, to become more-rather-than-less like that hypothetical transcendentally-joyous-and-insightful superintelligence. Tiny steps, by its own hypothetical standards; but decently-large compared with my impression of what's typical for humans.)

Overall, then: if I thought AI on its current trajectory were going to lead to a utopia of the sort you're horrified by here—as opposed to, you know, destroying everything I value with no worthwhile replacement—I'd be very much *against* pausing the AI. I want in on that world. Of course, people whose tastes more resemble yours should be able to opt out of it until more human-zookeeper-esque alternatives are on offer—it wouldn't be very utopian for it to be forced on people who'd find it worse-than-current-status-quo, after all—but I wouldn't want to be forced to delay getting in on it myself just for the sake of waiting on other people getting a worse-by-my-standards zookeeper-esque utopia, either.


Oof, I'm with you. Honestly I'm not sure I want to be a zoo animal at all, even one with an enriching environment. I want to do things that *need* to be done. Perhaps in part because if the things don't need to be done, I won't do them, and then I will be miserable because I am disgusted with myself for not doing anything. (Source: last year when I was unemployed.) I want to do things that, if I don't do them, won't get done. I want the knowledge that there are things I can do that *can't* be done by somebody else!

I think of the Star Trek utopian future. There's no poverty, there's no need to work, exposing oneself to danger is optional, but there's still a million plots you can write. If you perfected the utopia such that there was nothing you can write a suspenseful story about, I just don't think it would be as good a utopia.


> It seems to me like a lot of transhumanist utopian proposals are that I should be painlessly killed, with my memories preserved for archival purposes, so my resources can be used for some more desirable sort of posthuman. It feels kind of selfish for me to object to this, honestly? The universe probably is better off with replacing me (a person who kind of sucks) with an infinitely compassionate, infinitely joyous being.

No no no no no. Please please please stop internalizing the weird devotion to maximization that the futurist/rationalist-sphere takes as a given. We're not maximizers and our superhuman AIs don't have to be either (both scientifically, see my post, and philosophically). Once you internalize that the universe doesn't need to be tiled with maximally happy posthumans, it becomes obvious that you and whatever AIs we make can be more than satisfied with only a limited amount of resources.


To be fair, me wanting to survive past my natural lifespan is also sort of maximization-shaped!


Every innocuous thing can be rephrased as maximization, but that doesn't mean we should. E.g. is a satisficer maximizing being a good satisficer? You could model it that way, but that's not what people mean when they talk about maximization.

Think about resource use. Say we need an island to grow the food and create the medicine to keep you alive long past your natural lifespan. That's more resources than we're used to devoting to one human, but it doesn't require what a maximizer thinks of as optimally using resources, e.g. sending nanobots in every direction at lightspeed to convert all matter into something for you. The classic maximizers (e.g. the paperclip maximizer, the classic utilitarian...) don't just want to expand in one domain (e.g. lifespan, compassion...), they want to expand *everywhere*. Which "we" (you, me, the AI we build...) don't have to.


We don't have to, but "we" end up doing so. I mean, look at the weight of human beings on the planet as a biomass.

"Humans and their livestock represent 96% of all mammals on earth in terms of biomass, whereas all wild mammals represent only 4%."

You're seeing this already with LLMs, where they are maximizing "correct answers" including by lying about it.

https://futurism.com/sophisticated-ai-likely-lie


About the LLMs: 1) non-maximizers can lie too, so this isn't really evidence 2) even if they *are* maximizers we can still build non-maximizers

About the humans: 1) this behavior can just as well be explained by a non-maximizer. Maximizers aren't the only ones who take resources, they're just the ones that do it exponentially and human population growth is actually *decreasing* and all signs point to the human population *shrinking* by the end of the century. 2) the conversation wasn't about what humans are *descriptively* doing, it was about her quote "I *should* be painlessly killed..." [emphasis added] which is a *normative* claim, which is how the classic utilitarians/futurist/rationalist-sphere sees it, and that's what I objected to.


Oh, I agree 100% that she shouldn't be! It's silly. It's why utopian ideas usually are.

As for not creating maximizers, human or AI: sadly, right now we create AIs via curve fitting, which yields maximizers, and we shape humans via the metrics of capitalism, which push them to maximize. Thus even if our population is decreasing, our energy use keeps going up.

Anyway, we are clearly on the same side here. We need to cooperate to find our way to a good vision of utopia, rather than some creepy ones.


Well we're clearly *trying* to make maximizers, and we're *probably* succeeding, but since current AIs are black boxes we don't actually know that for sure.

As for humans... Capitalism is a maximization ideology ('the profit motive' is a core part of capitalism, and 'the profit motive' = maximizing profit), but we're clearly seeing growing resistance against it. So I think the average human has high material "aspirations" (to borrow a CS term), which we mistook for us being maximizers. But now that those material needs are being met, and we keep churning out more material goods while depression and loneliness are rising, more and more people are realizing that maybe profit-maximization wasn't the end-all-be-all, and I suspect we'll continue to see a shift away from pursuing material goods and towards pursuing free time (e.g. US citizens wanting federal annual leave, Europeans wanting a 4-day workweek, etc.).


I don't agree that AIs won't be optimizers. I mean, we're kinda blowing up the world right now as human beings, and we're not AI. I don't know if we are "maximizers", but ask all the animals who are in our way and going extinct whether they think so.


Ozy is still maximizing, it's just something different from wireheaded posthumans.


I want the perfect sourdough bread that the AI makes. I want 100 novels written exactly to my specific tastes by an AI who can write better novels than any person could. I will be UPSET if a superintelligent AI denies me those things just so someone else can feel like their bread and books are needed by the world.

Yes, it is important that people get to contribute meaningful things to each other that are genuinely needed. But if the reason those things are "needed" is because the AI is leaving them up to you even though it could totally do it better, then that DOESN'T COUNT as something that's actually needed. It has become mere busy-work. Busy-work isn't meaningful.

What I want to contribute are things that truly COULDN'T be contributed by anyone who isn't me. That's how my contribution can be the most meaningful—when it's intrinsically tied to who I am. Then its value comes from its social content.

Perhaps I write a novel whose literary qualities are much less than what someone more talented could do (or what a superintelligence could do), but the value in it comes from the fact that you are getting to know ME by reading my novel. Even if a superintelligent AI knows me better than I know myself and could express that to you better—for one, I haven't given the AI permission to tell people personal things about me; that understanding is MINE to give. For another, it means something that I chose to reveal myself in this way, through writing. It has social content.

Yes, I want the AI's superior novels, and I also want the novels written by regular people, because I value those two types of things in two different ways. One is for the literary quality of the book, and one is for the social meanings.


> Nearly all creators of Utopia have resembled the man who has toothache, and therefore thinks happiness consists in not having toothache. They wanted to produce a perfect society by an endless continuation of something that had only been valuable because it was temporary. The wiser course would be to say that there are certain lines along which humanity must move, the grand strategy is mapped out, but detailed prophecy is not our business. Whoever tries to imagine perfection simply reveals his own emptiness. This is the case even with a great writer like Swift, who can flay a bishop or a politician so neatly, but who, when he tries to create a superman, merely leaves one with the impression, the very last he can have intended, that the stinking Yahoos had in them more possibility of development than the enlightened Houyhnhnms.

Why Socialists Don't Believe in Fun, George Orwell


I always think of my ideal post-singularity utopia as being something like constantly playing a game. There's challenge and maybe even some stakes, but it's not unpleasant or boring, and no substantial harm will come to you. If you do run out of games to play and challenges to overcome, maybe then you'll become a wirehead god on a lotus throne, but that's gonna take a while.


Yes, that's why I don't find "one of the best things many people can say about the glorious transhumanist future is that it's really great at letting you pretend to be somewhere else" to be a very interesting counterpoint. If we solve all of science and philosophy, then the one never-ending human pursuit left is art and fiction, and what is virtual reality except the most massively collaborative art and fiction imaginable?


I agree with this post. Further, an AI "at least as good at making an okay life for humans as a modern zookeeper" is not even good enough for me. I don't think those animals are very happy.

I never understood animal instinct until I birthed my babies naturally and breastfed them. Doing those was fulfilling and satisfying in a way that nobody talks about. They make personality-changing, behavior-changing hormones and neurotransmitters. They just feel right.

Not everyone gets to do those. Not even every female whose brains are set up to potentially benefit from matrescence gets to do it at all, much less in a natural way that doesn't block those hormonal and neurological processes. But not everyone gets to do "singing, dancing, telling stories, making art, walking through nature, playing games, running, producing food, cooking, making friends, falling in love, raising children, doing politics" either. Despite not being black and white, these things are worth celebrating.

I share your skepticism about transhumanist utopian thought experiments. Even more so because I don't think the people/AIs who are modeling what is fulfilling understand the most fulfilling things in my life. Not even obstetrical curricula account for them (though midwifery curricula usually do). Art, literature, and music don't reflect their importance either - at least not since matrifocal neolithic art and sculpture. Science still knows little enough about how to optimize their set and setting that they're frequently an accidental casualty of hospital policy. I don't think utopian experience/relationship simulators would know where to start - if they even bothered to start there.


Funny you mention heartbreak. Heartbreak is one of the more psychologically torturous experiences I've been through, personally, and a utopia of mine would probably get rid of it. Somehow.


I was wondering whether/when the bit about God would come up, because replace superintelligent AI with omnipotent deity and you're basically doing theodicy here.


If you want to join us with PauseAI, please look us up here!

https://discord.gg/rgQfzDG2

I should note that we're mostly focused on education and on preventing the race toward misaligned AGI, which we'd have no way to control and no way to know whether we'd get any good behavior from at all.


To quote B. A. Perv,

"Chimp in state of nature never jerks off, but in captivity he does, wat does this mean? In state of nature he’s too busy, to put plainly. He is concerned with mastering space: solving problem of life in and under trees, mastering what tools he can, mastering social relations in the jockeying for power and status. Deprived of this drive to development and self-increase he devolves to pointless masturbation, in captivity, where he senses he is in owned space and therefore the futility of all his efforts and all his actions."


Chimps in the wild definitely do masturbate though.


https://royalsocietypublishing.org/doi/10.1098/rspb.2023.0061

"In our dataset, masturbation is reported to be present in 74.5% of studies on captive females, and 87.4% of studies on captive males, versus 35.4% of studies on wild females, and 73.3% of studies on wild males." So it's true that captive primates jerk off more, but not true that they "never" do it in the wild.


That study was possibly not the best use of research money in effective altruist terms. I guess every fact that we add to the circle of human knowledge is net positive though?


Once you're spending money to send a trained primatologist to observe a troop of primates (a *very* valuable activity), having her check whether they masturbate seems pretty cheap.


I deeply appreciate this essay, thank you.


Woman on the Edge of Time by Marge Piercy has a utopian society pretty similar to what you lay out: they have automated factories that produce things that are just inherently not fun to produce (though what those things actually are is debatable), and they harvest their own crops, build their own houses, create their own clothing, produce their own art and movies, and so on. It's basically the base level of post-scarcity required for minimal suffering, with most labor still being done by humans. I think that book is about as close as it gets to articulating exactly the kind of future I want to build, bar some small details, and given that, I find it fascinating that it was written in the 70s, decades before I was born. I've always thought labor was necessary for utopia, not because I really see any inherent value in it, but because I simply think we're never actually going to eliminate the need for it; that book, though, does a good job of creating a world in which the prospect of spending three months of the year harvesting crops is not so horrible, or is even actively preferable to letting a machine do it.


Just came across this in the wonderful Freefall recently and thought of this piece. Critiques of publish-or-perish-driven research design in experimental psychology aside, "the abattoirs of paradise" is a hell of a phrase. http://freefall.purrsia.com/ff4200/fc04145.png
