I am continually haunted by terrible thought experiments. Everywhere I go people are proposing thought experiments that involve people-seeds landing on the carpet or filling the observable universe with tightly packed shrimps[1] or something wacky about pills and then getting very angry at each other about the results. Sometimes thought experiments are even so terrible that they wind up justifying the U.S. government's use of torture.
I don't think this process is very enlightening about moral philosophy, human reasoning, or which charities we should donate to.
Previously, I had no more principled response to this than making grumpy noises and pondering whether it would be so bad really if we closed down all the philosophy departments and made all the philosophers get real jobs. But the book Weighing Animal Welfare, in chapter 4, has one. Since absolutely no one is going to read Weighing Animal Welfare—sadly, thought experiments about very tightly packed shrimps are much more popular than well-researched and empirically grounded views on animal sentience—I feel very little guilt about stealing their thoughts for the consumption of the general public.
Weighing Animal Welfare provides the first example I've ever seen of a meta-thought-experiment.
Alien Annie: comes from the planet Xaxatari and is an agent, but isn’t conscious in any way that humans can recognize. She has many capabilities and can live for up to trillions of years. If she does, she will destroy all life on Earth at some time in the distant future. Would Annie’s continued life be as good as a human’s continued life?
The question brought up by Alien Annie is: does a thought experiment this confusing, this completely disconnected from everyday life, incorporating this many concepts which are barely related to each other, say anything about anything philosophically interesting?
I, and Weighing Animal Welfare, say no. Weighing Animal Welfare presents a four-part test which explains exactly how Alien Annie went wrong.
First, a good thought experiment is easy to understand. It doesn't involve unfamiliar concepts or unnecessary details. It doesn't require expert knowledge. It doesn't postulate something that contradicts normal experience. It doesn't involve things we know people have trouble reasoning about, such as very large or very small numbers or probabilities. You might object that the purpose of your thought experiment is to show that people have trouble reasoning about very large or very small numbers or probabilities, but we already know that and no further demonstration is necessary.
People have to be able to understand a thought experiment to be able to give an enlightening answer to it. If a thought experiment is confusing or hard to reason about, then people's answers won't make a great deal of sense. That isn't their fault for being bad at philosophy; it's your fault for being bad at thought experiments.
Two of the bad thought experiments I laid out earlier fail this criterion. The Observable Universe of Tightly Packed Suffering Shrimp thought experiment requires having an intuitive sense of how big a googol is. Judith Thomson's people-seeds thought experiment requires pretending that embryos are pollen and can land on one's carpet and grow like plants—which isn't even how plants work! Plants don't grow on carpet!
On the other hand, consider Peter Singer’s famous drowning child thought experiment. It is easy to imagine a child drowning in a pond in front of you: no doubt many people have entertained themselves in boring situations with fantasies of such heroic rescues. The situation contains nothing confusing or counterintuitive.
Second, a good thought experiment isolates one relevant part of our intuitions by ruling out everything else that might affect our judgment. For this reason, a thought experiment may need to specify the situation very carefully. For example, it may be necessary to specify that you are a trolley engineer and know that the fat man definitely weighs enough to stop the trolley, or otherwise the problem will be about moral reasoning under uncertainty and not about the act/omission distinction.
The blue pill/red pill thought experiment fails this test very badly. Not only is it unclear to me what point the thought experiment is supposed to be making, it's unclear that it's making any point at all. Probably this is because it was made up by a twelve-year-old.
The Observable Universe of Tightly Packed Suffering Shrimp thought experiment also fails this prong, though more subtly. The thought experiment is supposed to be about aggregating large numbers of small harms. But by talking about shrimps, the Observable Universe of Tightly Packed Suffering Shrimp thought experiment brings in animal sentience and the moral patienthood of animals. Many people give the answer "I don't care about an observable universe of tightly packed suffering shrimp because I don't care about shrimp at all, regardless of how vast a space they take up or how tightly they're packed."
Third, a good thought experiment isn't framed in a way that lends itself to motivated reasoning, cognitive biases, or any unnecessary culture war consideration.
For example, sometimes people try to prove that there's such a thing as objective right and wrong by describing, in detail, the process of someone's spouse and children being raped and murdered in front of them. I think this is worse than saying the sentence "you say that there isn't such a thing as objective right or wrong, but it really seems like raping and murdering children is objectively wrong." The description doesn't actually make the point stronger; it just stirs up people's emotions to make them more likely to agree with you. Saying "yes, I continue to believe that in a certain philosophical sense this isn't 'morally wrong'" feels like saying it's okay for the people you love to be murdered, or that you don't want their murderer to be punished, or that you feel as strongly about this as you do about (say) not eating ice cream. You are slanting the thought experiment to get the answer you want.
Or consider the ticking time bomb thought experiment: should you commit torture if you have a terrorist in custody who is going to tell you about the location of a bomb that's about to go off? Again, this elicits a lot of emotions: desire for revenge on terrorists who murder innocent people, fear of dying in a terrorist attack, etc. I also suspect it invokes the fallacy of generalization from fictional evidence, because it's such an action-movie premise that people are subconsciously answering with what they think an action movie hero would do.
Unnecessary culture war is a subtle point. Obviously, a lot of culture war issues—abortion, sexual ethics, the definition of a country—are philosophically interesting. It makes sense that we've developed thought experiments to help us think about them. But often thought experiments generate more heat than light on the culture war issues they're supposed to illuminate.
The original Politics is the Mind-killer post gave a very good example of an unnecessarily culture-war-y thought experiment:
In artificial intelligence, and particularly in the domain of nonmonotonic reasoning, there’s a standard problem: “All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?”
What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on artificial intelligence and discourage them from entering the field?
This is an unambiguous example because Nixon has nothing to do with nonmonotonic reasoning.[2]
A more controversial example is something like this post about Zionism; I'm not excerpting the thought experiment because it is rather disturbing. The post makes interesting points about moral responsibility, moral desert, blackmail, and act vs. rule consequentialism; it chooses to do so by naming its position "Zionism." This is deeply unenlightening as a matter of moral philosophy, because anyone who is anti-Zionist is obviously going to line themselves up on the side labeled "anti-Zionist" regardless of their position on rule consequentialism.
But, obviously, Scott Aaronson in this post isn't really attempting to do elevated moral philosophy and then inexplicably dragging in a contested political issue. He wants to argue about Israel/Palestine, and is using moral philosophy as a tool to do so.
I think we should all do less culture warring, but ultimately I can't even stop myself from culture warring, much less all the rest of you. So at the very least I think you should have it clear in your mind whether you are doing moral philosophy or culture war, and if it's the second thing you shouldn't expect your thought experiments to shed any useful light on broader principles.
The drowning child thought experiment follows this criterion. "It is bad when children drown in front of you" is very possibly the most apolitical position it is possible to have, agreed upon by MAGAs and Blueskyists alike. No cognitive biases are involved, as far as I can tell. And the drowning child thought experiment invokes the opposite of motivated reasoning. Everyone agrees that you should rescue drowning children; probably everyone would rescue drowning children if they had the opportunity to do so. The drowning child thought experiment uses this fact to cut through our motivated reasoning about how we have no responsibility to help children who live far away from us.
Fourth, a good thought experiment is not about an empirically verifiable fact about the world.
If a fact about the world is empirically verifiable, you shouldn't do a thought experiment; you should do a real experiment. You shouldn't do a thought experiment to figure out which form of government seems like it would be highest-quality and most respectful of human freedom. You should look at present data and the testimony of history to find out what the actual track records of actual governments are.
On the other hand, the drowning child thought experiment shows that, in some situations, we have a duty to help others—not just a duty to avoid harming them. This is a non-empirical claim. You can't run a randomized controlled trial or do comparative historical research to find out whether people have a duty to help others. In principle, you can try living out each position and see which has outcomes you like better, but the effectiveness of this way of resolving moral conundrums is itself philosophically controversial.
Often, people mix empirical and non-empirical questions in their thought experiments. For example, you might do a thought experiment to find out whether people should care about bees. But "should we care about bees?" is actually two different questions:
"What traits do bees have?"
"What traits does a being have to have before we care about it?"
Thought experiments can help us figure out what kinds of beings we care about, but they are absolute garbage at figuring out what bees are like. No matter how hard you contemplate, you're not going to be able to figure out from first principles whether (say) bees experience cognitive biases. That question is a subject for entomological research, not for philosophy.
Similarly, the ticking time bomb thought experiment is often used to justify torture. But for the ticking time bomb experiment to justify any torture in the real world, you'd have to first answer:
"How often do people typically tell the truth under torture?"
"How often is it true, outside of action movies and Call of Duty, that a terrorist attack is imminently about to happen and can be prevented but only if you torture the perpetrator, whom you conveniently have in custody?"
The case against torture has nothing to do with whether you can, in principle, torture one person to save the lives of many more people. It has to do with the fact that the answers to those questions are, respectively:
"The things people say under torture are basically uncorrelated with the truth, because they say anything that they think will get the pain to stop."
"This never happens."
In conclusion, before presenting a thought experiment, ask yourself these questions:
1. Is it easy to understand?
2. Does it rule out irrelevant aspects of our intuitions?
3. Does it avoid provoking cognitive biases, motivated reasoning, or unnecessary culture warring?
4. Am I using thought experiments for what thought experiments are good for, not for empirical truths?
If the answer to all four questions is "yes", you may do a thought experiment and avoid the scourge of Alien Annie.
I am dunking on Moralla a lot in this post so I want to say explicitly that she seems like a very nice person and she didn't deserve at all the amount of hate she got for her post.
One reader objected that this wasn't very culture-war, but I think it was far more culture-war forty or fifty years ago when the thought experiment was coined and the Cold War was more of a live issue.
"Filling the universe with shrimp seems plausible", said Tom, superficially.
I like most of this, but I'm conflicted about 3.
On the one hand, I take your point that loading your thought experiment down with emotionally impactful trappings is likely to make people worse at moral reasoning, not better.
On the other hand, it seems like the whole point of thought experiments like this is that there are cases where people would endorse some kind of "you should always X" claim, but then in sufficiently extreme cases realize "okay, maybe you shouldn't X if the consequences are bad enough." I think a thought experiment that reveals this can tell us something real about people's moral intuitions. The axe murderer thought experiment from the linked post is an example of this.
I don't have a good rule about when piling on emotional consequences makes for a good thought experiment vs. a bad one. But I think it's gotta be more complicated than "never do this." Anybody else have thoughts on this?