31 Comments
Nov 5 · Liked by Ozy Brennan

If you want another good piece of fiction about this exact thing, try The Metamorphosis of Prime Intellect by Roger Williams. He was writing about this stuff before it was cool. I read it at what was probably not an age-appropriate point, but it really shaped my thoughts on Utopia... which are very similar to yours.

16 hrs ago · Liked by Ozy Brennan

> But consider the Society for Creative Anachronism (SCA). The SCA is a large community of people who do things in archaic and difficult ways, even though it is often straightforwardly possible to obtain the same result more cheaply and easily with modern technology and global supply chains. If we have superintelligence-guided nanobots, SCAdians would continue to sew their own clothes, calligraph their own awards, and forge their own swords.

I think we can go beyond "continue" and say that given transhumanism, SCAdians might well grow their own flax for garb, hit each other harder with sharper swords, learn more languages, cook with smaller eggs that exhibit more seasonal variance, and so on.

Nov 5 · edited Nov 5

On the personal-identity issue: as far as I can tell, there's no way for me to stay exactly myself over time. (I'm not currently very clear on whether it's even coherent to preserve a time-slice of myself indefinitely; it seems plausible that minds are inherently objects-in-motion, defined by their transformations rather than by their states.) I have no way to continue to exist even a second from now, never mind in the Hypothetical Posthuman Future. So what I do, instead, is to try to become whatever future-entity will best accomplish my present-goals, iteratively, forever. These goals include being happy, bringing about outcomes where other people are happy, et cetera. I have many past selves who bear very little psychological resemblance to me but whose goals I'm nonetheless fulfilling better than they themselves could; I expect a hypothetical posthuman entity derived from me to feel similarly about my present self. This seems... normal and fine? Arguably a sort of death, sure, but death-with-adequate-replacement of the sort that happens to almost everyone all the time, rather than death-without-adequate-replacement of the sort that leads to the world being tangibly worsened.

(Indeed, I take active steps already, in my current life, to become more-rather-than-less like that hypothetical transcendentally-joyous-and-insightful superintelligence. Tiny steps, by its own hypothetical standards; but decently-large compared with my impression of what's typical for humans.)

Overall, then: if I thought AI on its current trajectory were going to lead to a utopia of the sort you're horrified by here—as opposed to, you know, destroying everything I value with no worthwhile replacement—I'd be very much *against* pausing the AI. I want in on that world. Of course, people whose tastes more resemble yours should be able to opt out of it until more human-zookeeper-esque alternatives are on offer—it wouldn't be very utopian for it to be forced on people who'd find it worse-than-current-status-quo, after all—but I wouldn't want to be forced to delay getting in on it myself just for the sake of waiting on other people getting a worse-by-my-standards zookeeper-esque utopia, either.


> It seems to me like a lot of transhumanist utopian proposals are that I should be painlessly killed, with my memories preserved for archival purposes, so my resources can be used for some more desirable sort of posthuman. It feels kind of selfish for me to object to this, honestly? The universe probably is better off with replacing me (a person who kind of sucks) with an infinitely compassionate, infinitely joyous being.

No no no no no. Please please please stop internalizing the weird devotion to maximization that the futurist/rationalist-sphere takes as a given. We're not maximizers and our superhuman AIs don't have to be either (both scientifically, see my post, and philosophically). Once you internalize that the universe doesn't need to be tiled with maximally happy posthumans, it becomes obvious that you and whatever AIs we make can be more than satisfied with only a limited amount of resources.

author

To be fair, me wanting to survive past my natural lifespan is also sort of maximization-shaped!


Every innocuous thing can be rephrased as maximization, but that doesn't mean we should. E.g., is a satisficer maximizing "being a good satisficer"? You could model it that way, but that's not what people mean when they talk about maximization.

Think about resource use. Say we need an island to grow the food and create the medicine to keep you alive long past your natural lifespan. That's more than we're used to using to sustain one human, but it doesn't require what a maximizer thinks of as optimally using resources, e.g. sending nanobots in every direction at lightspeed to convert all matter into something for you. The classic maximizers (e.g. the paperclip maximizer, the classic utilitarian...) don't just want to expand in one domain (e.g. lifespan, compassion...); they want to expand *everywhere*. Which "we" (you, me, the AI we build...) don't have to.
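To make the distinction concrete, here's a minimal toy sketch (mine, not the commenter's; the step size and the "enough" threshold are illustrative assumptions): a maximizer's resource acquisition grows without bound, while a satisficer plateaus once it has enough.

```python
# Toy sketch of the maximizer/satisficer distinction discussed above.
# The step size (1 unit) and the "enough" threshold (100 units) are
# illustrative assumptions, not anything from the thread.

def maximizer_step(resources: float) -> float:
    """A maximizer acquires more resources on every step, forever."""
    return resources + 1.0

def satisficer_step(resources: float, enough: float = 100.0) -> float:
    """A satisficer acquires resources only until its target is met."""
    if resources >= enough:
        return resources  # "good enough": stop expanding
    return resources + 1.0

r_max = r_sat = 0.0
for _ in range(1000):
    r_max = maximizer_step(r_max)
    r_sat = satisficer_step(r_sat)

print(r_max)  # 1000.0 -- still growing, bounded only by the loop
print(r_sat)  # 100.0  -- plateaued at its target and stayed there
```

You *could* redescribe the satisficer as "maximizing being a good satisficer," but its behavior stays bounded either way; the maximizing label adds nothing.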


We don't have to, but "we" end up doing so. I mean, look at the weight of human beings on the planet as biomass.

"Humans and their livestock represent 96% of all mammals on earth in terms of biomass, whereas all wild mammals represent only 4%."

You're seeing this already with LLMs, which are maximizing "correct answers," including by lying.

https://futurism.com/sophisticated-ai-likely-lie


About the LLMs: 1) non-maximizers can lie too, so this isn't really evidence; 2) even if they *are* maximizers, we can still build non-maximizers.

About the humans: 1) This behavior can just as well be explained by a non-maximizer. Maximizers aren't the only ones who take resources; they're just the ones that do it exponentially, and human population growth is actually *decreasing*, with all signs pointing to the human population *shrinking* by the end of the century. 2) The conversation wasn't about what humans are *descriptively* doing; it was about her quote "I *should* be painlessly killed..." [emphasis added], which is a *normative* claim. That's how the classic utilitarian/futurist/rationalist-sphere sees it, and that's what I objected to.


Oh, I agree 100% that she shouldn't be! It's silly. It's why utopian ideas usually are.

As for not creating maximizers, human or AI: sadly, right now we create AIs via curve fitting, which gives us maximizers, and we push humans, via the metrics of capitalism, to maximize. Thus even if our population is decreasing, our energy use keeps going up.

Anyway, we are clearly on the same side here. We need to cooperate to find a good idea of utopia, rather than some creepy one.


Well, we're clearly *trying* to make maximizers, and we're *probably* succeeding, but since current AIs are black boxes, we don't actually know that for sure.

As for humans... Capitalism is a maximization ideology ('the profit motive' is a core part of capitalism, and 'the profit motive' = maximizing profit), but we're clearly seeing growing resistance against it. So I think the average human has high material "aspirations" (to borrow a CS term), which we mistook for us being maximizers. But now that those material needs are being met and we keep churning out more material goods while depression and loneliness are rising, more and more people are realizing that maybe profit-maximization wasn't the end-all-be-all, and I suspect we'll continue to see a shift away from pursuing material goods and towards pursuing free time (e.g. US citizens wanting federal annual leave, Europeans wanting a 4-day workweek, etc.).


I don't agree that AIs won't be optimizers. I mean, we're kinda blowing up the world right now as human beings, and we're not AI. I don't know if we are "maximizers," but ask all of the animals who are in our way and going extinct whether they think so.


Ozy is still maximizing; it's just something different from wireheaded posthumans.


Oof, I'm with you. Honestly I'm not sure I want to be a zoo animal at all, even one with an enriching environment. I want to do things that *need* to be done. Perhaps in part because if the things don't need to be done, I won't do them, and then I will be miserable because I am disgusted with myself for not doing anything. (Source: last year when I was unemployed.) I want to do things that, if I don't do them, won't get done. I want the knowledge that there are things I can do that *can't* be done by somebody else!

I think of the Star Trek utopian future. There's no poverty, there's no need to work, exposing oneself to danger is optional, but there are still a million plots you can write. If you perfected the utopia such that there was nothing you could write a suspenseful story about, I just don't think it would be as good a utopia.


I always think of my ideal post-singularity utopia as being something like constantly playing a game. There's challenge and maybe even some stakes, but it's not unpleasant or boring, and no substantial harm will come to you. If you do run out of games to play and challenges to overcome, maybe then you'll become a wirehead god on a lotus throne, but that's gonna take a while.


Yes, that's why I don't find "one of the best things many people can say about the glorious transhumanist future is that it’s really great at letting you pretend to be somewhere else" to be a very interesting counterpoint. If we solve all science and philosophy, then the one neverending human pursuit left is art and fiction, and what is virtual reality except the most massively collaborative art and fiction imaginable?


I was wondering whether/when the bit about God would come up, because replace superintelligent AI with omnipotent deity and you're basically doing theodicy here.


> Nearly all creators of Utopia have resembled the man who has toothache, and therefore thinks happiness consists in not having toothache. They wanted to produce a perfect society by an endless continuation of something that had only been valuable because it was temporary. The wiser course would be to say that there are certain lines along which humanity must move, the grand strategy is mapped out, but detailed prophecy is not our business. Whoever tries to imagine perfection simply reveals his own emptiness. This is the case even with a great writer like Swift, who can flay a bishop or a politician so neatly, but who, when he tries to create a superman, merely leaves one with the impression, the very last he can have intended, that the stinking Yahoos had in them more possibility of development than the enlightened Houyhnhnms.

George Orwell, "Why Socialists Don't Believe in Fun"


If you want to join us with PauseAI, please look us up here!

https://discord.gg/rgQfzDG2

I should note that we're mostly focused on education and on preventing a race toward misaligned AGI, which we would have no way to control and no way to know whether we'd get any good behavior from at all.


> Comforting sad loved ones doesn’t require the existence of depression

It does, however, require the existence of sadness. What is sadness if not a failure in this vision of utopia? How can a solved world contain sadness? I just can't get over this.


I enjoy being sad sometimes; this seems like a thing that could be implemented for other people too. I don't know if that's Ozy's endorsed answer, but I'm pretty sure it's how I'd want to implement it if I were trying to produce something like Ozy's described utopia. Sadness is an emotion which leads to increased enjoyment of activities like crying, being held and comforted by people, et cetera; one doesn't need to find it unpleasant to have those patterns, and if one has those patterns one can still find benefit in being comforted by a loved one when sad, and therefore find benefit-from-the-other-direction in comforting sad loved ones.


> But I still don’t think that gets around the basic point—which comes up a lot when I discuss such matters—that one of the best things many people can say about the glorious transhumanist future is that it’s really great at letting you pretend to be somewhere else.

I read arguments of this form as attempting to lower-bound the goodness of utopia. You might worry these things would be lost, but if you look closer there's still a way to get them. So one would hope that if people aren't doing this, they're doing something even better.

I don't think this is airtight -- there are lots of things people choose to do that we often don't think are all-things-considered better than the other options, e.g. drugs and Twitter. So to the extent the worry is that we'll all be wireheading, this might not move you much.

But I do find that it moves me a bit. A big part of the reason I'm upset by the idea of lions pacing over and over is that I imagine they're not enjoying themselves. A future in which people are primarily experiencing great joy and satisfaction -- not a fake kind of pleasure, not something that they'll look back and realize was hollow, but the real thing -- seems at least "pretty good" to me. And I imagine there's plenty of room to do much, much, much better.


> Nick-Bostrom-interpreted-by-Scott-Alexander seems to believe that, outside of games and sports, people need their goal-directed work to be objectively useful.

Yeah, I think this misunderstands the book. It really is a work of philosophy, and it's interested in whatever 'objectivity' can be taken to gesture at precisely because all subjective desires can trivially be satisfied in deep utopia. (Well, barring desires that are impossible to satisfy.) I didn't much like the book, but I suppose I can't really complain when the philosopher is a philosopher…

author

I haven't read the book and am not going to because I think everyone enjoys me less when I'm moping about the concept of utopia. :P


Funny you mention heartbreak. Heartbreak is one of the more psychologically torturous experiences I've been through, personally, and a utopia of mine would probably get rid of it. Somehow.


> The benevolent superintelligence could forbid replicators to produce tomatoes

DAMN that's cruel. Like objectively it's not as bad as eternity-of-torture dystopias (also low in tomatoes), but it's engineered to disappoint me, personally.


To quote B. A. Perv,

"Chimp in state of nature never jerks off, but in captivity he does, wat does this mean? In state of nature he’s too busy, to put plainly. He is concerned with mastering space: solving problem of life in and under trees, mastering what tools he can, mastering social relations in the jockeying for power and status. Deprived of this drive to development and self-increase he devolves to pointless masturbation, in captivity, where he senses he is in owned space and therefore the futility of all his efforts and all his actions."

author

Chimps in the wild definitely do masturbate though.


https://royalsocietypublishing.org/doi/10.1098/rspb.2023.0061

"In our dataset, masturbation is reported to be present in 74.5% of studies on captive females, and 87.4% of studies on captive males, versus 35.4% of studies on wild females, and 73.3% of studies on wild males." So it's true that captive primates jerk off more, but not true that they "never" do it in the wild.


That study was possibly not the best use of research money in effective altruist terms. I guess every fact that we add to the circle of human knowledge is net positive though?

author

Once you're spending money to send a trained primatologist to observe a troop of primates (a *very* valuable activity), having her check whether they masturbate seems pretty cheap.
