Against the concept of moral circles
A common metaphor is the idea of the expanding moral circle. At first, people only cared about the inner circle: family and friends. Many people have expanded their moral circles to include strangers in the same country as them, then people in different countries. Some people have even expanded them to include animals, the environment, and people in the far future. I’ve used this metaphor myself. Moral circles are even the governing motif for my favorite song.
Unfortunately, I don’t think expanding circles is an accurate analogy to explain what’s going on when some people care about farmed animals or future people or people in different countries.
For one thing, people often agree that a group is made up of moral patients, and still don’t act like they care about them. Sometimes people have factual disagreements about whether a group is possible to help. For example, someone might agree that people in low-income countries are moral patients, but think that it’s impossible to know whether a charity can do good unless you’re on the ground and able to check it yourself. Her view might be wrong, but it doesn’t mean she doesn’t care about people in the developing world.
Sometimes longtermists say that, unlike ordinary people, they have “expanded the moral circle” to include future people. But most people who aren’t philosophers care about future people. That’s why environmental groups give speeches about protecting nature for future generations, the American Constitution secures the blessings of liberty to ourselves and our posterity, and racists want to secure a future for white children. The dispute between self-identified longtermists and everyone else is about whether we should help future people through developing aligned superintelligence or through some other method, like building a national park system or passing the Bill of Rights or starting a race war.
Less flatteringly, apparent moral circle expansion often comes from changed incentives. Often, people do evil simply because it would be annoying and inconvenient to do good. Slaveowners often owned slaves because if they freed their slaves they’d be poor and people would think they were weird. Men sometimes raped their wives because they liked hurting their wives, but often raped their wives because orgasms are fun and their wives had no power to enforce their own views on the subject. Soldiers looted cities because they wanted nice things and having moral compunctions would make all the other soldiers think you were weak and unmanly.
Today, it is instead annoying and inconvenient to own slaves or rape your wife or loot cities, so people behave better. Probably men in 1800 and today are equally likely to view their wives as moral patients. But today, if you try to rape your wife, she is very likely to use her independent income to support herself as she moves out, divorces you, and reports you to the police. So the standards for male behavior are much higher. Many men who would have raped their wives in 1800 find the idea of committing rape repulsive and instead stick to posting bad takes on r/deadbedrooms.
But even when people legitimately come to reason differently about morality, “moral circle expansion” unhelpfully conflates at least four different changes, most of which are not reasonably describable as circle expansion.
Allow me to be economicsbrained for a moment. The reason I’m nice to my friends is not primarily a disinterested concern for their well-being. It’s that helping my friends benefits me in the long run. If I listen to my friends talk about their feelings, my friends will listen to me talk about my feelings. If I help my friends when their lives fall apart, my friends will help me when my life falls apart. If I matchmake my friends, they will invite me to more fun parties.
This isn’t straightforward trade, because friendship has an insurance function. If I allow a homeless friend to sleep on my couch indefinitely, it’s unlikely that they’ll ever be able to pay me back directly. Instead, I know that, if I had been the one to end up homeless, the friend would allow me to sleep on their couch indefinitely. The nature of insurance is that it usually doesn’t pay out.
Friends also face free-rider problems. For example, every individual benefits if they don’t bring a dish to the potluck and get to eat as much as they want without buying anything. But if no one brings anything to the potluck, everyone goes hungry. Everyone benefits if they’re part of a community where people bring food to potlucks, keep an eye on each other’s children, organize dinner parties, and keep common areas tidy.
You can expand this to a community or societal level. Sociologically, people talk about high-trust and low-trust societies; a more accurate way to put it might be high-trustworthiness and low-trustworthiness societies. High-trustworthiness societies are much nicer. You spend less time worrying about being a victim of theft or violence. When you buy goods or services, they’re normally of the advertised quality and rarely outright scams. You don’t have to bribe government officials. You can ask strangers for trivial favors, like directions or keeping an eye on your child while you run to the bathroom. As you might expect, high-trust societies tend to be richer, safer, and freer.
Crucially, high-trustworthiness societies only exist when people basically behave morally. If the entire civil service takes bribes, you can’t possibly arrest all of them and put them all in prison. Legal enforcement only works if the average person will respond to a bribe offer with outrage and offense.
This kind of enlightened self-interest is good. Enlightened self-interest is the only reason we have nice things. Even good utilitarians practice enlightened self-interest in their private lives, which is why you can get effective altruists to bring food to potlucks instead of donating the money to the GiveWell All Grants Fund.
But however enlightened your self-interest, it offers no reason not to own slaves. By taking slaves, you have already decided that, instead of getting them to do what you want through reciprocity and fair dealing, you’re getting them to do what you want through violence. Perhaps you will treat them honorably, if you think that’s a good way to get more work out of them. But if you think beating them is a better way to get more work, who’s going to object? The slaves, perhaps, but fortunately you can whip them until their complaints are silenced.
In some situations, enlightened self-interest suggests restraint during war: for example, in medieval Europe, the nobility successfully enforced a norm of holding each other for ransom instead of killing each other, and today respectable countries at least pay lip service to not committing war crimes. But throughout most of history, enlightened self-interest offered no reason not to rape and pillage the enemy. They’re the enemy; by definition, you’re not working together with them to build a society. And they certainly wouldn’t reward you for your restraint by refraining from raping or pillaging you.
In general, enlightened self-interest doesn’t imply caring about people weaker than you, unless there is some chance that you could end up in their position or they have someone powerful who cares about them. It doesn’t imply caring about enemies. And it doesn’t imply caring about people you don’t interact with.
If we’d like to care about those groups of people, we need altruism: the desire to improve the lives of others, not because it makes our own lives better in the long run, but because we care about the others’ well-being for its own sake.
If I buy a malaria net for an African child, the African child will never pay me back. She’ll never know who helped her; even if she did, she’s poor enough and far enough away that she has no opportunity to improve my life. I can’t claim that donations are insuring me against the possibility of having been born an African child without being irritatingly philosophical. The only reason to donate to the Against Malaria Foundation is that I prefer children not die.
Most people experience a combination of altruism and enlightened self-interest. I am a much better trade and insurance partner if I care for my friends for their own sake, instead of being all economicsbrained about it. People put a lot of effort into identifying fair-weather friends; if you’re too explicit about trade and insurance, you’ll rapidly find yourself losing all your friends.
Similarly, I think a lot of people who specifically want to help people in their city, subculture, or country are experiencing both altruism and enlightened self-interest. I want Oaklanders, Americans, queer people, and effective altruists to be happy, because they’re people and I want people to be happy. I’m also aware that I’m better off if Oakland, America, the queer community, and the effective altruism community are nice places to be. And on some level (perhaps as a holdover from when communities were much smaller and norms were more easily enforced) I expect that, if I use my surplus resources to make Oaklanders, Americans, queers, or effective altruists better off, this will create a norm where other Oaklanders, Americans, queers, and effective altruists with surplus resources use their resources to make me better off.
I think when most people say “moral circle expansion”, the thing they usually mean is feeling more altruism for a given level of enlightened self-interest.1 However, even if two people feel altruistic urges of the same strength, one can easily have a more “expanded moral circle” than the other.
The raw altruistic urge is easily misled. It’s easy to care about a specific person who is right in front of you; it’s harder to care about large groups of people or about people who are far away. Our altruistic emotions do poorly with complicated problems, cost-benefit analyses, and counterfactual reasoning. Most people find it easier to care about people who are similar to them, attractive people, and people who aren’t scary. Most of all, as the writer Annie Dillard once said:
There are 1,198,500,000 people alive now in China. To get a feel for what this means, simply take yourself—in all your singularity, importance, complexity, and love—and multiply by 1,198,500,000. See? Nothing to it.
One thing people can mean by “moral circle expansion” is taking the basic altruistic impulse and making it rigorous. You can’t, actually, multiply the caring you have about one malnourished child by 42 million, for all 42 million malnourished children in the world. But that’s not a fact about the world or about ethics; that’s a fact about your cognitive limitations.
So “moral circle expansion” can refer to the process of setting aside the quirks and biases of our empathy. I care more about suffering people who are similar to me, but it seems to me that, in some important way, all suffering people are equally deserving of care. I can’t multiply the distress I feel about a malnourished child by 42 million, but it seems to me that there being 42 million of them is, in fact, 42 million times worse.
This second meaning of “moral circle expansion” is applying the words of one of the greatest ethical philosophers of the 20th century: a person’s a person, no matter how small.
But people often mean something by “moral circle expansion” other than applying the wise words of Horton the Elephant. Everyone agrees that all humans are sentient. If you ask a Nazi “do Jews have emotions?” or a raping and pillaging soldier “can the enemy experience happiness?” or a slaveowner “are slaves capable of suffering?”, the Nazi or soldier or slaveowner will give the correct answer. They just don’t agree that someone having feelings is a reason not to hurt them.
But in other cases we face what Jonathan Birch calls “the edge of sentience”, or what you might call “the edge of moral patienthood”: people in minimally conscious states, newborn babies, fetuses, pets, farmed animals, invertebrates, and AI systems. In these cases, thoughtful, informed people acting in good faith can legitimately dispute whether the beings are moral patients.
When people say “moral circle expansion,” they can mean having particular beliefs about the edges of moral patienthood. For example, Alice might have an “expanded moral circle” because she has a theory of consciousness in which there is something it is like to be a bat; conversely, Bob might have a “contracted moral circle” because his theory of consciousness says that bats are automatons. Similarly, Alice might have an “expanded moral circle” because she cares about all beings that can suffer, while Bob might have a “contracted moral circle” because he only cares about beings that are capable of accepting the social contract. For that matter, Bob might “expand his moral circle” by reading more about ethology and deciding that chimpanzees are at least as capable of accepting the social contract as five-year-olds.
In these cases, expanding your moral circle isn’t an unalloyed good. We wouldn’t think it’s wise to expand your moral circle to include tomato plants, sourdough starters, rocks, or thermostats.
However, even when people agree on their moral theories and theories of consciousness, they can still treat the edges of moral patienthood differently. Some people believe that, as long as it hasn’t been conclusively shown that a being is sentient, you can treat them as you like. In reality, we often have to make decisions under conditions of uncertainty.
It is much worse to torture an animal than it is to be mildly inconvenienced by injecting a painkiller first; therefore, even if you think there’s only a 60% chance an animal is sentient, it is wise to inject the painkiller. When precautions are cheap enough, it can be justified to take them even if you’re pretty sure the being isn’t conscious. Even if you think there’s only a 1% chance ants are conscious, you shouldn’t burn ants with your magnifying glass, because it is easy to find equally entertaining things to do with a magnifying glass that don’t risk inflicting on any being a torturous death.
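The reasoning in the painkiller case is just an expected-value comparison. Here is a toy sketch of it in code; all the numbers are illustrative assumptions chosen to match the 60% example, not figures from the post:

```python
# Toy expected-value comparison for acting under uncertainty about sentience.
# All quantities are made-up illustrative units of "badness".

P_SENTIENT = 0.60          # assumed probability the animal can suffer
HARM_IF_SENTIENT = 100.0   # badness of operating without a painkiller on a sentient animal
COST_OF_PAINKILLER = 1.0   # mild inconvenience of injecting the painkiller

# Expected badness of skipping the painkiller: probability times harm.
expected_harm = P_SENTIENT * HARM_IF_SENTIENT  # 0.60 * 100.0 = 60.0

# The precaution is worth taking whenever the expected harm it prevents
# exceeds its cost -- here, 60.0 badness averted for 1.0 badness spent.
take_precaution = expected_harm > COST_OF_PAINKILLER
print(take_precaution)  # True
```

The same structure explains the ant case: even at a 1% probability of sentience, a torturous death is bad enough, and the precaution (doing something else with the magnifying glass) cheap enough, that the expected-value comparison still favors restraint.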
In this case, we’re not changing our definition of moral patienthood; instead, we’re changing how we reason about situations in which we’re not sure whether a being is a moral patient or not. Because it’s usually much worse to mistreat a moral patient than to treat a moral nonpatient with kindness, taking uncertainty into account means acting like you care about more beings—even if which beings you care about hasn’t changed at all.
Sometimes people say “moral circle expansion” to refer to treating more beings as moral patients. I think this is basically confused. Much of the time, when people seem to treat more beings as moral patients, it has nothing to do with any change in ethical philosophy; instead, their factual beliefs have changed, or behaving morally has become more convenient. Even when people’s moral beliefs have changed, they can change in at least four different ways:
1. Caring for other beings for their own sake, instead of exercising enlightened self-interest.
2. Expressing that care in a rational way, rather than a way driven by the biases and limitations of our instinctive altruistic impulses.
3. Changing beliefs about which beings are moral patients.
4. Reasoning under uncertainty about whether other beings are moral patients, instead of dividing them into a “definitely moral patient”/”definitely not moral patient” binary.
The expanding moral circle is a nice idea for poetry, songs, and rituals. I have no intention of tossing out The Circle. However, I think the concept is philosophically confused in a way that makes it a bad explanation of moral progress. And, while you might want to convince people to be more altruistic or to go about their altruism more intelligently, very little good is done by trying to get people to “expand their moral circles.”
1. You can also increase your altruism-to-enlightened-self-interest ratio by decreasing how much enlightened self-interest you have, but don’t do that.

I think this post has a good typology of different kinds of moral circle expansion, but it didn't really convince me that the moral circle itself is a bad metaphor. If you define a person's moral circle as "the set of all beings that they treat with a non-negligible amount of altruistic concern", which I think is pretty close to the way it's used casually, then the four kinds of moral-belief-change can all expand this circle. You shouldn't treat "moral circle expansion" as synonymous with any one of them, but I think it's still useful to have a category that includes all of them.
Even in the ancient world, sometimes enlightened self-interest would tell you not to pillage your enemy's city (much) if you could do better by collecting taxes or tribute from it over time. But if you're just going to go back home, self-interest would still say that you might as well steal what you can while the stealing is good. (It's the whole "roving bandit vs stationary bandit" thing.)
And before cities there wasn't very much around to pillage, except possibly livestock (or slaves)...