59 Comments

Tim:

"Filling the universe with shrimp seems plausible", said Tom, superficially.

C_B:

I like most of this, but I'm conflicted about 3.

On the one hand, I take your point that loading your thought experiment down with emotionally impactful trappings is likely to make people worse at moral reasoning, not better.

On the other hand, it seems like the whole point of thought experiments like this is that there are cases where people would endorse some kind of "you should always X" claim, but then in sufficiently extreme cases realize "okay, maybe you shouldn't X if the consequences are bad enough." I think revealing this with a thought experiment can reveal something real about people's moral intuitions. The axe murderer thought experiment from the linked post is an example of this.

I don't have a good rule about when piling on emotional consequences makes for a good thought experiment vs. a bad one. But I think it's gotta be more complicated than "never do this." Anybody else have thoughts on this?

AV:

Some thoughts

1. No backwards reasoning from extreme scenarios. "Torture might be justified if there was a ticking time bomb and I was 100% sure it would work" does not justify torture in less extreme circumstances. You can reason from a drowning child nearby to a starving child far away because the situations are genuinely analogous. You *can't* use the drowning child to argue that we're morally obligated to give toddlers cookies any time they want one (reducing net toddler suffering!).

2. Minimum viable thought experiment. The Trolley problem doesn't *need* graphic descriptions of what a trolley crash does to human bodies to be effective. Adding these details would make the problem less effective as a teaching tool. (Honestly, the fat man is on thin ice. It makes the scenario more memorable, but also runs into some points of cultural bias that imo distract from the main point.)

3. This bullet originally said, "You should only use highly emotional thought experiments if the consequences of avoiding them are sufficiently bad :P" but now that I think about it probably the opposite is true. If you feel that you're in an extremely high-stakes moral philosophy situation, chances are you're really emotionally wrapped up in it yourself. People who are very emotionally invested in something tend to write unconvincing appeals to emotion - it's hard to model someone who's really different from you! If you're tempted to write a borderline-pornographic thought experiment, that's probably a sign that you should take a step back and consider things from your audience's point of view.

Taymon A. Beal:

The reason the fat man has to be fat is not to make the scenario more memorable, but to eliminate "jump onto the track yourself" as a possible solution.

AV:

Yeah, I don't buy that. Compare: "The reason the man in the big, heavy wheelchair has to be in a wheelchair is to eliminate the possibility of jumping onto the track yourself". Also, believe it or not, fat people do occasionally do moral philosophy.

I said that the fat man is on thin ice because it's not *that* distracting as part of the thought experiment. Still, choosing a category with less social baggage (body builder?) would improve the moral clarity of the thought experiment.

harry:

It's also to make it seem more plausible; a single average-weight person in the way doesn't seem like it will stop an out-of-control trolley to me. That being said, I don't even think a very fat person is going to stop a trolley moving at person-killing speeds, though I'm not a qualified trolley engineer.

Philip:

> Similarly, the ticking time bomb thought experiment is often used to justify torture.

The ticking time bomb experiment is used to establish whether or not torture is *ever* justified. A thought experiment is useful in order to abstract from the details of a particular case. It’s a feature not a bug.

Taymon A. Beal:

I think there is a sorta separate argument here, that you should not argue that torture can ever theoretically be justified even if you believe that it can be, because bad actors will strip all the nuance out of your argument and round it to "torture is definitely always fine and never causes problems" and then use it to justify torturing random people for no reason. I don't know what I think about this consideration, but conflating it with questions about what makes thought experiments work or not work for figuring things out seems like maybe an error in the post.

titotal:

In thought experiment land, you can always come up with a situation where any horrible act is justified. Just say that aliens come to Earth and threaten to slowly and painfully kill all humans unless you do [insert most horrible act you can think of here]. Now you have established that [most horrible act you can think of] is sometimes justified. This is not useful information!

Jerdle:

The Zionism one has a major logical issue. It's actually two questions:

1. Is it okay to pull the lever in that situation?

2. Is Zionism equivalent to that situation?

And from what I've seen from anti-Zionists, their opposition tends to be to 2, rather than 1. In fact, many of them saw October 7 as pulling the lever.

Now I am a Zionist, and do agree with Scott on both questions, but I am not remotely happy with him just assuming the answer to 2. It's like the shrimp thing, where "shrimp are sentient and morally valuable" is assumed, despite being the crux for many people.

lin:

To push this a step farther, once both sides perceive themselves as being in that situation, each side's perception creates the other side's reality. If your enemy is yelling "I'm going to kill your children to save mine", of course it follows that you had better fight him for the lever so you can kill _his_ children to save _yours_. What difference does it make to you whether it was or wasn't your ancestors or your government who tied both of your children to the tracks in the first place?

Jerdle:

And that's why war is hell, especially when there's barely enough room for one state in the region.

Philippe Saner:

I'm definitely objecting to #2; the premise seems to be that Palestinians committed the Holocaust.

Which, well, they didn't.

Jerdle:

The thing being described isn't the Holocaust. It's October 7.

Philippe Saner:

It's a really bad description, then. Because October 7th happened long after Israel started doing the stuff anti-Zionists object to. It can't be used to justify things that happened before it.

And October 7th killed a little over a thousand people. Can't reasonably describe it as having "murdered most of your family". But the Holocaust actually did operate on that scale.

Doug S.:

A better analogy might be the attack on Pearl Harbor. Japan killed 2,403 Americans and wounded about 1,178. In the ensuing war, the number of Japanese people killed has been estimated at between 2,500,000 and 3,100,000, with civilian casualties between 550,000 and 800,000.

Was the US response to Pearl Harbor disproportionate?

Philippe Saner:

If we're using this as an analogy for the conflict in Palestine, then we run into the time travel problem again. Because the stuff anti-Zionists object to started long before October 7th and would still be happening without it. The US was, critically, not trying to conquer Japan prior to the Pearl Harbor attack.

By any reasonable standard, Israel's settlement program is an act of war. There's nothing more war-like than a bunch of guys with guns showing up to seize your land. And the scale of the modern settlement program pales in comparison to the Nakba.

If you actually want my opinion on WWII, then I think the American response was mostly but not entirely proportionate. Overall America had to fight, but there were some lines crossed.

Until the recent conflict forced me to actually learn some history, I assumed that the situation in Israel was similar to America in WWII. But when I looked at the words and actions of the Zionist movement over the past century, it became obvious that wiping out the Palestinians and stealing all their stuff wasn't a response to anything. It was just the plan. It would've been the plan even if the Palestinians had been pacifist saints - which they weren't and aren't, obviously.

To be fair to the Zionists, though, their policies are not that different from the ones that built my country (Canada).

Doug S.:

My view of things is that the entire situation has degenerated into a classic case of bad fighting against worse... I could say what I think would have to happen for peace to be possible, but my words aren't going to have the slightest impact on what actually happens, so what's the point? It's no different than being the guy yelling at the characters in a horror movie not to do the stupid thing that's going to get them killed. :(

Taymon A. Beal:

I think Aaronson's implicit argument is that a lot of pro-Palestinian leftists are using simplistic modes of moral reasoning, in particular "the more materially oppressed side is always in the right", that get the wrong answer to 1. Of course, actually assessing whether they're doing that would also require a bunch of empirical work that the post doesn't do.

Ali Afroz:

I actually think his whole argument relies on the intuition that if you are the materially oppressed party, who has lost their family, you can do otherwise immoral stuff. Obviously, part of his intuition is also that since you lost your family because of the bad actor, you can do bad stuff to him specifically even if innocents get caught in the crossfire. But that kind of reasoning, where one community's wrongdoing justifies that community being mistreated, including innocent members who are only the descendants of the bad actors, is a common failure mode of left-wing reasoning that he has fallen into. I think his reasoning is actually very similar to extreme social justice people who actively think that groups like white people can be mistreated for the good of minorities.

Ali Afroz:

I agree with this, although in fairness there are also lots of people like me who would actively reject 1, and I think it's not at all obvious that 1 is correct. It's very sensitive to framing effects, because "hurting innocents, especially children, is fine if their parents or other relatives did something bad and your own relatives benefit from hurting the innocents, even if the benefit is much smaller than the cost to the innocents" is actually a pretty wild take. So depending on how you frame it, you will get different answers, and it's a very controversial position to hold, to put it mildly. The point is not that it's definitely wrong, but it's not obviously correct. Of course, like you said, there are lots of people on the opposite side who accept 1 and think it justifies their own side doing bad things, which is not very surprising, because in war it's common for both sides to come up with reasons why actually greater brutality from their side is justified. Which, to be clear, does not necessarily mean that the reasons are incorrect.

Eschatron9000:

I don't think the thought experiment says anything about Zionism, because of (2), but I think (1) makes it an interesting experiment anyway.

My reaction was "so sad, the poor man has brainworms", but apparently people here agree with him?! "If someone does enough bad stuff, you have not only a right, but an obligation, to take it out on some innocent kids" is monstrous. "If a serial child-killer ties his own kids to train tracks, killing them is totally an effective deterrent! It will get that guy to knock it off, or deter some future guys from going after your kids" is baffling. What?

Taymon A. Beal:

I don't want to get into object-level Israel–Palestine discourse here, but I don't think anyone is saying that? In the thought experiment, you pull the lever in order to save your own child, not for deterrent reasons.

Eschatron9000:

Eh I think the disagreement mostly comes down to "does it actually save your child permanently, or just buy her a few weeks before the guy does it again?", which isn't a very interesting detail of the thought experiment.

Killing five innocent children to save your own child is pretty fucked up, and most legal systems would punish you severely for it, but presumably anyone who buys "I have an obligation to save my own child even if it requires killing other children" also buys "…and then going to jail for it".

Ali Afroz:

Deterrence can be an effective strategy to protect your own children, so in fact, this is an implication of his argument. If you demonstrate you are willing to retaliate by killing his own family members and children, then the killer might not be so willing to go after your remaining children.

harry:

My objection is actually to question 1, though (that is, I would hope not to pull the lever). My reasoning is as follows: my thought-experiment daughter is explicitly my last child, meaning that after the grisly events of the thought experiment there should be no more violence, except possibly against myself, and that still only adds up to two people. In addition, even if I were to pull the lever, that would still leave me as a potential victim.

But I'm explicitly in the "no moral desert" position and Scott Aaronson has stated that he isn't.

In the real world, I'm very unsure that I could act in such a calculating way because we're so strongly attached to our own children. I've never had a child and the idea still feels repulsive to me. But that doesn't change my higher-level reasoning.

Ali Afroz:

I think killing those children to save your own child is not blameworthy, because it would require an almost saintly level of moral character not to do it. It's like if you were born in a society where beating and raping your slaves is normal. If you do a normal or less-than-normal amount of that, you cannot be blamed, because it is just not humanly feasible for most people to recognise that it's bad; but it is still a suboptimal thing to do, and it would be extremely admirable if you nevertheless did the right thing and did not hurt your slaves. Same thing here: you can't expect most people not to save their own child at the cost of the children of someone they hate, but having the good sense to recognise that this is actually hurting innocents for a much smaller benefit to yourself and your family, and therefore not doing it, is extremely praiseworthy.

Torches Together:

"The things people say under torture are basically uncorrelated with the truth, because they say anything that they think will get the pain to stop."

Before I get very nerd sniped, do you actually think this is true?

Or would you acknowledge that there are contexts where torture does actually extract information effectively?

Ozy Brennan:

Torture can be effective at producing accurate information if the victim definitely knows it and the information can be immediately verified. For example, if you break someone's fingers until they tell you their ATM PIN, this will probably work.

Torture is *ineffective* if the victim might not know the information-- for example, if you have captured a terrorist, they might not know about battle plans or the location of their leader. In this situation, the victim will make shit up in order to get the pain to stop, which means you get a lot of false positives. It is also ineffective if you can't verify the information, because some people will make shit up because they're mad at you for torturing them, and because it's easy to subconsciously slant the situation so you get the answer you want even if it's false. (This is, for example, how a lot of early modern blood libel tortures worked.)
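
To make the false-positive problem concrete, here's a toy calculation (a sketch with invented numbers, not empirical estimates of anything):

```python
# Toy model: torturing captured suspects for information they may not have.
# All numbers are illustrative assumptions.
n_suspects = 1000
p_actually_knows = 0.3  # assume only 30% of captives know the answer

# Under torture, everyone "confesses" something, because talking is the
# only way to make the pain stop.
true_answers = int(n_suspects * p_actually_knows)  # 300
fabricated_answers = n_suspects - true_answers     # 700

# Without independent verification, the two kinds of confession look
# identical to the interrogator, so most of the "intelligence" is noise.
print(f"{fabricated_answers / n_suspects:.0%} of confessions are fabricated")  # 70%
```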

Doug S.:

If the situation is serious enough to justify torture, it's also serious enough to break the law and throw yourself on the mercy of the court afterwards.

Doug S.:

That being said, if you want to put a character up against a torturer in a TTRPG or fictional context, here's a way to adjust the incentives the victim faces:

Have the torturer start by asking a bunch of questions that he already knows the answer to and punish lies. After a while, sprinkle in some questions that the torturer doesn't know the answer to - the victim might not be able to tell which questions the torturer already knows the answer to and which he doesn't, so the victim's incentive (in terms of avoiding more torture) would be to answer everything as honestly as possible.

If the torturer thinks his victim is particularly rational, the torturer can respond to an "I don't know" by flipping a coin and only punishing that answer if it lands tails; making up a lie that gets caught gives a 100% chance of more torture, while admitting "I don't know" would have a 50% chance.
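
The expected-punishment arithmetic behind the coin flip is straightforward (a minimal sketch, assuming a fabricated answer is always eventually caught and punished once):

```python
# Strategies available to a victim who genuinely doesn't know the answer.
# Probabilities follow the scheme above: fabrications are always caught;
# "I don't know" is punished only if the coin lands tails.
p_punished = {
    "fabricate an answer": 1.0,  # lie gets checked and caught -> punished
    "admit ignorance": 0.5,      # punished only on tails
}

for strategy, p in p_punished.items():
    print(f"{strategy}: expected punishments = {p}")
# Admitting ignorance halves the expected punishment, so a rational victim
# answers honestly even on questions the torturer can't check.
```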

Torches Together:

Yeah, that was the angle I was going to take - ATM theft (or "codes to the safe") is a typical case where it works well, and is relatively common in some countries. Seems there's probably a fairly simple algorithm where factors like immediate verifiability, high tolerance for false positives, time pressure etc. determine whether torture will yield valuable information.

I couldn't get hold of much empirical data on what works in what circumstances, and how gradations of torture work (from deliberately causing psychological distress, to brutal medieval stuff), but a fairly basic understanding of incentives is sufficient to dismiss strong claims that "torture doesn't work".
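
Something like this, maybe (my own sketch of such a decision rule, with invented factor names; not a model from any interrogation literature):

```python
def torture_likely_yields_value(victim_surely_knows: bool,
                                immediately_verifiable: bool,
                                tolerate_false_positives: bool) -> bool:
    """Rough decision rule built from the factors discussed above."""
    # The PIN / safe-code case: the victim certainly knows the answer
    # and it can be checked on the spot, so lying doesn't pay.
    if victim_surely_knows and immediately_verifiable:
        return True
    # Otherwise the output is mostly fabrication, which is only "valuable"
    # if you can afford to chase down many false leads.
    return tolerate_false_positives

print(torture_likely_yields_value(True, True, False))    # ATM PIN case: True
print(torture_likely_yields_value(False, False, False))  # captured terrorist: False
```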

Ozy Brennan:

A basic understanding of incentives shows you that torture doesn't work for information gathering, outside of a very narrow set of situations that ~never hold for governments! (Criminals need to torture people for their PINs because they're weaker than governments, which can for example threaten you with prison time.) Pro-torture reasoning generally seems to come from people who have trouble modeling the incentives in situations like "this person doesn't know the information" and "the information is information I don't like, so I'll dismiss it as false."

In what circumstance exactly do you have a high tolerance for false positives and also a lot of time pressure?

Ali Afroz:

Right, but in the thought experiment, the conditions for torture to work are absolutely satisfied, because the terrorist knows where the bomb is, you know they know this, and you can easily verify whether they told you the correct location. Most of this is already stipulated in the experiment, specifically because the assumption is that torture likely works in this situation.

Ozy Brennan:

Yes, but you can't generalize from that thought experiment to the real world, where those conditions essentially never hold for governmental torture.

Doug S.:

Some thought experiments remind me of deliberately horrible questions, along the lines of "Which do you prefer: horrible thing X or horrible thing Y?"

Like in this Dilbert strip: https://ic.pics.livejournal.com/icon_uk/11800056/855574/855574_original.gif

Jasnah Kholin:

I disagree with many parts of the post, but I think it would be better to start at the beginning: what is the goal of a thought experiment? Without a goal, one cannot evaluate the experiment.

For example, my opinion about the Drowning Child is that the goal of this experiment is to (and I'm going to cite from Self-Integrity and the Drowning Child):

"this parable was setting up two pieces of yourself at odds, so that you could not be both at once, and arranging for one of them to hammer down the other in a way that would leave it feeling small and injured and unable to speak in its own defense."

I, on the other hand, did the opposite: asked myself honestly why I give such different answers, and then changed the experiment to see if those changes really change my answers.

Most thought experiments I encountered were used as weapons, intended to convince people without regard to truth or their own best judgement. Judging them as if the person was trying to do something else and was just confused is a weird error. You judge a car and a computer by different criteria, and you don't think a computer is bad because it can't bring you from place to place.

The problem with most thought experiments is not doing something badly, but choosing a bad thing to do, or lying to yourself about what you are trying to do and doing a muddled thing that does more than one thing badly.

The error is not so much at the implementation stage, but at the aiming, at deciding what to do.

Topher Brennan:

I have always liked the Nixon example and have never understood the issue with it. Does it feel like an insinuation that Nixon isn't a "real" Quaker? Or is the idea that while Nixon's supporters don't actually think it's bad that he's not a pacifist, and in fact probably feel an actual pacifist as president would cause problems, they might feel embarrassment at the thought that it's so incredibly obvious Nixon is not a pacifist?

Ali Afroz:

The idea is that there was absolutely no reason to bring him into it, because politics will cause bad reasoning where a less emotionally charged example would not.

Edmund:

Yes, but I think Topher is asking what makes the Nixon example emotionally charged at all. While using "All humans are vertebrates, Donald Trump is a human, therefore Donald Trump is a vertebrate" would be pretty strange, it's not strange in a direction where people's emotional associations with Donald Trump are going to obfuscate the logic, and plausibly the Nixon thing is more like that than like "but what if an axe murderer assaulted your family in front of you".

Ali Afroz:

I think this is because being a vertebrate is not emotionally charged, and in any case, everyone who knows the word knows all humans belong to this category with zero ambiguity. Meanwhile, pacifism and war is a deeply political issue, and one Nixon was already involved in, given things like the Vietnam war. So it's not at all surprising to me that your example is strange but unproblematic, whereas the Nixon example is problematic but unsurprising, precisely because of the political angle. It just doesn't make sense to use the Nixon example, given how strange it is, if your only objective is teaching, but it's not at all strange if your main goal is to criticise him under cover of teaching.

John Quiggin:

I've been banging this drum forever, and agree entirely. You're right that Singer's example shows that thought experiments can be done well.

Lillian:

One thing that thought experiments sometimes miss is moral calibration. The fat man variation of the trolley problem is to me invalid, even if you posit that I know for a fact that the fat man will stop the trolley, and that I will successfully push him onto its path rather than cause him to fall to his death without landing on the tracks because I screwed up or he fought back. The reason being that in real life I cannot possibly achieve that level of certainty, not even if I have relevant expertise, and my moral intuitions are calibrated to account for that. You cannot simply assert that you have removed uncertainty from the problem, because my moral sense will not believe you since I have a built in expectation that situations like this will always involve uncertainty, and anyone saying otherwise is either delusional or trying to swindle me. Moreover, I cannot simply recalibrate these expectations at my convenience because that would defeat the point of having a moral sense. Consequently the thought experiment cannot reveal what it is trying to reveal because its structure causes too many other considerations to become irretrievably bound up within it.

Peasy:

>What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on artificial intelligence and discourage them from entering the field?

This makes no sense. Who on earth in the twenty-first century has strong emotions about Richard Nixon? Who in this galaxy in the twenty-first century considers Richard Nixon emblematic of Republicans? Who in all possible universes would take the simple assertion "A specific Republican politician apparently did not observe his religion's tenets to the letter, just like almost every other religious person on earth" as an attack on Republicans?

Expand full comment
User was temporarily suspended for this comment.

Ozy Brennan:

User suspended for 30 days for posting inflammatory culture war stuff in the comments of a mostly unrelated post, and also being a dick about multiple Scotts A.

Jasnah Kholin:

That is an extremely bad-faith description of both situations.

DABM (Sep 25, edited):

What do you disagree with?

Jasnah Kholin:

Well, I remember reading the SSC neo-reactionary posts, and I came to the conclusion that he believes 80-99% (depending on how you measure) of what they say is wrong. Calling that "promoting" is...

Well, I could start describing the errors, there is more than one, but that's the wrong level of meta.

It just doesn't look to me like the output of an algorithm that is aiming for truth, at all, but like the output of an algorithm that optimizes for the worst thing you can say about someone while being constrained by avoiding direct lying.

Some people think the reaction to that is bounded distrust, like this: https://thezvi.substack.com/p/how-to-bounded-distrust

I prefer to just mark algorithms like that as untrustworthy, and throw their output in the bin.

(I'm not against conversation, but I start this conversation by assuming you are deeply untrustworthy, and I don't believe good things can come of such conversations, generally speaking.)

DABM:

If you can describe the errors, why aren't you doing so?

Jasnah Kholin:

Because the meaning of the bad-faith assumption is that I expect it would be an utter waste of time. I probably would have done it if the discussion were in my native language, but I'm reluctant to do something effortful when I expect the effort will not be reciprocated. Entering a discussion where I expect to need an order of magnitude more effort than the other side is not exactly motivating.

And, see, your answers here confirmed this assumption.

DABM:

As for the promoting thing, he absolutely did talk in private about wanting to spread their ideas - just look at the blog post I linked below. Not all of their ideas, of course, just the ones he agreed with. I agree he is not in any straightforward sense a reactionary, but I think that does not fully absolve him of responsibility for deliberately trying to signal-boost them and get them more attention.

Jasnah Kholin:

You noticed that what you said didn't contradict what I said, right? If someone is 90% wrong and I spread the 10% that is right, "promoting them" is not how I would describe the situation, nor do I expect a random person who hears that X promotes Y to find out that X thinks Y is 90% wrong.

Hoffnung:

promoting people who what now?

DABM:

It's a long story, but for a (definitely partisan, but information-heavy) account see: https://reflectivealtruism.com/2024/10/31/human-biodiversity-part-4-astral-codex-ten/ I should say that Thorstad sees this primarily through the lens of promoting the specific *empirical* claim that Black people are genetically less intelligent than Whites and Asians. I do not think that claim about the genetics of intelligence is true, but for me the central issue is helping - somewhat deliberately - to bring attention to and promote the Neoreactionaries as a *political* movement, with goals for how the world should be, not just claims about how it is, and associated political attitudes and biases. There are people who I think probably believe the empirical claim about the genetics of intelligence whom I don't particularly think it is bad or shameful to be associated with, e.g. Sam Harris or Andrew Sullivan - because they are not hardcore fascists like the NRs.

In any case, the most credible defence of Alexander on this is that he wasn't really trying to promote Neoreaction as a whole, but only to draw attention to some, as he sees it, uncomfortable but true empirical beliefs they hold, whilst remaining himself committed to liberal democracy and liberal rights for all. (When I talk about liberalism in this context, I mean the sort of rights and freedoms associated with what the 00s Western mainstream would regard as "real" democracy - not leftism, nor a pro-market but socially liberal position that sits between the left and conservatism.) People who made that defense - among whom Aaronson, as far as I remember, was certainly one - laid huge stress (not entirely implausibly) on being prepared to hear out the evidence for ideas even if those ideas were taboo, came from bad people, etc. Which, fair enough; most people do indeed err on the side of not enough tolerance for that in my view, and probably I have done so too at some point. But the absolute minimum demand for not being a hypocrite, if you're going to staunchly defend that stand, is to also apply it when the offensive claims are about groups you belong to. Aaronson has *wildly* failed that standard in the piece on Israel.

Michael:

A definite vice of analytic philosophers is making their thought experiments too whimsical. You ideally don’t want a cute little story with quirky characters. Less severe than the culture war stuff but similarly makes it harder to think clearly.

Taymon A. Beal:

I think this is at least not trivial. Having your writing style be so boring that your readers' eyes slide off the page isn't conducive to clear thinking either.

Torches Together:

It seems plausible that there are actually far more boring and dry thought experiments in analytic philosophy ("imagine X agent does Y, under a, b, c constraints"), but we only share and remember the whimsical or absurd ones.

Taymon A. Beal:

Certainly, you can get away with a more boring style when writing for specialist audiences than when writing for popular ones, because the specialists are already selected for the ability to read and retain dry texts.

The thing where analytic-philosophy thought experiments tend to be a bit more whimsical (compared to other elements of philosophical writing) is something of an arbitrary genre convention, I think, enabled by thought experiments lending themselves somewhat more to this kind of style than things like definitions and proofs. But on the whole, I think it's a *good* genre convention, because it makes the field's work and results more accessible to laypeople.
