Species-Appropriate Natural Behaviors And Utopia
In favor of minimally competent utopia design
Before the 1960s, people made zoos that seemed like they had everything an animal would want: lots of food, lots of water, comfortable temperatures, no predators, medical treatment of their diseases. Surely these zoos, they thought, would lead to a utopia of extraordinary animal bliss.
Well, no.
Actually, the animals went insane.
You might wonder how you can tell if a tiger or an elephant or a sifaka monkey is insane, since you can’t exactly get a psychiatrist to perform a diagnostic interview. The answer: the animals performed repetitive behaviors known as stereotypies. Stereotypies can be disturbing to watch. Lions spend hours pacing out a circle ten feet in diameter. Bears throw their heads back, over and over again, the exact same way, for the entire time they’re awake. Birds pluck out their feathers until they’re bald.
Starting in the 1960s, zookeepers began to realize that animals need something beyond food and water and treatment of their diseases: they need enrichment. Specifically, they need the opportunity to perform “species-appropriate behaviors” or “natural behaviors” or to meet “ethological needs.”
Carnivores need to hunt. Social animals need to form social groups, of a particular size, in which they form status hierarchies and make friends and enemies. Beavers need to build dams. Some species-appropriate natural behaviors get very specific: chickens need to throw dust on their wings.
Humans have species-appropriate natural behaviors too: singing, dancing, telling stories, making art, walking through nature, playing games, running, producing food, cooking, making friends, falling in love, raising children, doing politics.
I was thinking of this when I read Scott Alexander’s review of Deep Utopia, which contains numerous passages like this:
Our deep utopia will know how to wirehead people safely. So worst-case scenario, if you absolutely can’t figure out anything else to do, you live in perfect bliss forever. Bostrom urges us not to reflexively turn up our noses at this outcome. Wireheading grosses us out because our best approximations for it - drugs, porn, etc - are tawdry and shallow. Actually-good wireheading would be neither. You could walk through the woods at sunrise, experiencing a combination of the joy you felt gazing on the face of your newborn first child, the excitement Einstein experienced upon seeing the first glimmers of relativity, and the ecstasy of St. Teresa as she gazed upon the face of God…
If you’re only concerned about avoiding wireheading, you could spend Utopia appreciating art. The Deep Utopians could hack your brain to give you the critical refinement of Harold Bloom or some other great art-appreciator, and you could spend eternity reading the Great Books and having extremely perspicacious opinions about them. Plenty of scholars do that today, and nobody thinks their lives are meaningless. In fact, why stop at Bloom? The Utopians could hack you into some kind of superbeing who can appreciate superart as far beyond current humanity as Shakespeare is beyond a tree frog.
If you live a billion years, do you run out of Art to appreciate? Not just in the sense of exhausting humanity’s store of Art (superintelligent art generator AIs can always add more art faster than you can exhaust it), but in the sense of exhausting the space of possible Art? Bostrom is unsure. He suggests that since we’re only using Art to satisfy ourselves that we’re not cheating - rather than demanding that the Art itself be interesting - we can change our interestingness criteria a little whenever we run out of Art, helping us distinguish ever finer gradations.
(All of the above goes for appreciating the profound truths of Science too. We assume that all worthwhile science has already been discovered - or that everything left requires a particle accelerator the size of the Milky Way plus an AI with a brain the size of Jupiter to interpret the results. You will not be asked to help, but you can still try to contemplate the already-discovered truths and bask in their elegance.)
At which point I give a little scream.
Later, he says:
The only sentences in Deep Utopia that I appreciated without reservation were a few almost throwaway references to zones where utopia had been suspended. Suppose you want to gain meaning by climbing Everest, but it wouldn’t count unless there’s a real risk of death. That is, if you fall in a crevasse, you can’t have the option to call on the nanobot-genies to teleport you safely back to your bed. Bostrom suggests - then doesn’t follow up on - utopia-free zones where the nanobot-genies are absent. If you die in a utopia-free-zone, you die in real life…
At this point, why not just go all the way? Do the Deep Utopians have their Amish? When you get tired of being blissed out all the time, can you go to their version of Lancaster, Pennsylvania, and do heavy farm labor, secure in the knowledge that you’ll go hungry if the corn doesn’t grow?
Or is that the wrong way to think about it? Is it less of an Amish farming village than a virtual reality sim? Want meaning, struggle, and passion? Become Napoleon for a century. Go in the experience machine, and - with only the vaguest memory of your past existence, or none at all - get born on the island of Corsica in 1769 and see what happens. Spend a while relaxing from posthumanity in the body of a 5’7, IQ 135 Frenchman.
Scott Alexander makes the good point that most people in 18th-century France don’t get to be Napoleon, and even Napoleon’s life was often boring and unpleasant in a way that posthumanity could prevent. But I still don’t think that gets around the basic point—which comes up a lot when I discuss such matters—that one of the best things many people can say about the glorious transhumanist future is that it’s really great at letting you pretend to be somewhere else.
Part of my reaction here is that I’m rather provincial about personal identity. I don’t think that a transcendentally superhuman intelligence with a brain the size of a planet counts as me. I am also suspicious that a being that spends all their time feeling unimaginable joy, insight, and divine ecstasy isn’t me either—my brain can’t feel the most intense version of all three of those emotions at the same time to begin with, and after a certain amount of incredibly intense pleasure I am done and would like to take a nap. It seems to me like a lot of transhumanist utopian proposals amount to saying that I should be painlessly killed, with my memories preserved for archival purposes, so my resources can be used for some more desirable sort of posthuman.
It feels kind of selfish for me to object to this, honestly? The universe probably is better off replacing me (a person who kind of sucks) with an infinitely compassionate, infinitely joyous being. But I like being alive. If I’m going to have to die for the sake of a better future, people should at least allow me to feel nobly self-sacrificial about it, instead of pretending it’s good for my sake.
But I think a lot of my problem—if we assume, as Nick-Bostrom-as-interpreted-by-Scott-Alexander seems to generally be doing, that Deep Utopia is populated by baseline humans—is that utopia designers are like 1950s zookeepers. We got you food, water, comfortable temperatures, no predators, and medical treatment of your diseases! We even got you an arbitrary amount of art created by superintelligent art producers AND super-heroin! What more are you asking for?
Well: singing, dancing, telling stories, making art, walking through nature, playing games, running, producing food, cooking, making friends, falling in love, raising children, doing politics.
I am a tiger, and my utopia must contain lots of watermelons for me to hunt.
Scott Alexander talks about the Deep Utopian Amish. I think this intuitively feels more satisfying than regular Deep Utopia, because it feels like a place where our ethological needs might be met. Among the Deep Utopian Amish, I can comfort my friend about her breakup, because we haven’t eliminated sadness and social conflict. I can sing in a group with people I love, instead of infinite music being created without my intervention by our AI overlords. I can bake sourdough for dinner without my friends all going “wow, this is so much worse than the perfect sourdough available from our nanobot-powered replicators.”
Similarly, I don’t think the appeal of elaborate pre-Singularity video games is mostly about any particular desire to live before the Singularity. It’s because sports and games and crafts and mutually helpful friendships definitely exist before the Singularity, and people seem very squirrely on the subject of whether they’d exist afterward.
According to Nick-Bostrom-interpreted-by-Scott-Alexander, the primary activities available in utopia are consuming things: walking in nature, appreciating art, appreciating science, drinking tea.[1] But humans have an ethological need, not only for consumption, but for production.
Not all humans, to be sure. No human ethological need is shared by all humans: I myself don’t comprehend the appeal of games or sports. But most people feel their most fulfilled lives involve some kind of goal-directed work: fixing cars, painting, programming, crocheting, gardening, gathering mushrooms, making homemade cheese, winning races, improving at Magic: the Gathering, running tabletop roleplaying game campaigns. And most people feel their most fulfilled lives involve being able to help others, if only through comforting and advising their friends who are in trouble.
Nick-Bostrom-interpreted-by-Scott-Alexander seems to believe that, outside of games and sports, people need their goal-directed work to be objectively useful. Once there are superintelligence-guided nanobots, that’s it for the human ethological need for production, and we need to figure out some way to cope without it. I don’t think this is true.
Even Nick-Bostrom-interpreted-by-Scott-Alexander agrees that people might try to get very good at games and sports, which is proof of concept that goal-directed work doesn’t have to be objectively useful. But consider the Society for Creative Anachronism (SCA). The SCA is a large community of people who do things in archaic and difficult ways, even though it is often straightforwardly possible to obtain the same result more cheaply and easily with modern technology and global supply chains. If we had superintelligence-guided nanobots, SCAdians would continue to sew their own clothes, calligraph their own awards, and forge their own swords.
Of course, most people aren’t SCAdians. But even so, there’s no reason to assume that the human ethological need for production will be forever unmet and we just have to get used to it. For example, the superintelligence could just… not… do things?
So people won’t garden, crochet, or fix cars if they have nanobots to produce food, clothes, and transit for them? The benevolent superintelligence could forbid replicators from producing tomatoes, crocheted sweaters, or mint-condition 1974 Ford Mavericks.
Similarly, to allow people to help others, we don’t need to go all the way to:
We have to go stricter! What about “you have to make a mark on the world” or “you have to make a positive difference”? Here I start to find Bostrom’s solutions a little gimmicky. You could have Person A pledge to be sad unless Person B climbs Everest (if they can’t honor this pledge on their own, they could reverse wirehead into being sad). Then Person B has to climb Everest in order to make a difference and save Person A! Why would Person A agree to this scheme? Because they’re also making a positive difference in the world, by helping provide their fellow non-cheaters with purpose!
Alternatively, the superintelligence could allow people to have a normal range of emotions and opinions about each other and the world, and reserve ‘superintelligence provides the single magic piece of advice that fixes everything’ for extreme cases. Then we can all support each other through our breakups, long-running feuds, mild phobias, writer’s blocks, quests to climb Mount Everest, etc.
Superintelligence help isn’t a binary. The superintelligence can function as a safety net, while still allowing people to have meaningful work and real friendships. Comforting sad loved ones doesn’t require the existence of depression. Baking sourdough bread is still good if everyone has enough to eat.
To visualize such a utopia, I recommend checking out Alicorn’s excellent short story Will.
It’s true that the superintelligence could comfort your friend and make sourdough bread better than you could. I think in this case, however, we’re more similar to tigers than we’d like to admit. Just as the tiger doesn’t think “I could just get this meat without having to attack the watermelon,” most people, not being very philosophically minded, would accept the superintelligence’s behavior as a brute fact about the universe. And for the philosophically minded? They’ve been dealing with “God could solve all our problems with a wave of His hand, but doesn’t” for millennia, and I see no reason for them to be less satisfied with their answers if they don’t have to explain away smallpox.
Reading people’s speculations about utopia has a tendency to make me support Pause AI, in the hopes of delaying utopia as long as possible.[2] But, ultimately, I do have some hope. All we need, for the world to be okay, is for our superintelligent omnibenevolent AI overlords to be at least as good at making an okay life for humans as a modern zookeeper.
[1] You might also get to play games, take an active role in a religious ritual, and remember your grandmother, none of which fit neatly in the production/consumption binary.
[2] Ideally until 2073, a number I chose for principled reasons and not because it is my expected death year.
If you want another good piece of fiction about this exact thing, try The Metamorphosis of Prime Intellect by Roger Williams. He was writing about this stuff before it was cool. I read it at an age that probably wasn’t appropriate, but it really shaped my thoughts on utopia… which are very similar to yours.
> If we had superintelligence-guided nanobots, SCAdians would continue to sew their own clothes, calligraph their own awards, and forge their own swords.
I think we can go beyond "continue" and say that given transhumanism, SCAdians might well grow their own flax for garb, hit each other harder with sharper swords, learn more languages, cook with smaller eggs that exhibit more seasonal variance, and so on.