91 Comments
David Riceman:

An alternative explanation of your bird example: people care about species rather than individuals. Even species is generic. Do they differentiate between barn owls and great horned owls? I doubt it. Do they differentiate between my dog and an undifferentiated dog? Probably. But I don't think they put "number of injured birds" in the same category as "number of injured neighbors".

sidereal-telos:

"People care about species, not individuals" mostly explains how people feel about endangered-animal conservation, but I don't think it can explain the oil spill result, because none of the scenarios would endanger an entire species of bird.

PowPow:

I actually think there are several alternative explanations of this study. One goes the way you said; another goes: people care about injured animals and want to save them, so they calculate a number they can afford to spend helping those animals, and it's already roughly the highest number they can afford. So going up with the birds doesn't make sense, because even a few birds would already bring them to their maximum dollar amount.

JC (Mar 6, edited):

I disagree with this, because you're ignoring differences in values. You can't do the math to tell you what values you should have.

> Certain causes, such as renovating the Lincoln Center to have better acoustics ($100 million), no doubt are many many more orders of magnitude less effective.

This is a little silly. Less effective... at what? Less effective at saving lives, sure, but that wasn't the intended purpose of the money. Saving lives isn't the only goal or only thing that matters. You might as well say the deworming donations are less effective because they are less effective at improving acoustics, promoting art, or stopping global warming.

This is about values, ultimately, and everyone is going to have different values. I doubt Geffen wanted to donate to do the most good or save the most lives - that's simply not what philanthropy is about most of the time. He wanted other things he valued, like honor and recognition, and he presumably wanted to support an organization and cause that he personally cared about. There's no amount of doing the math that can possibly tell you what values you should have.

> Farmed animals vastly outnumber pets in need of shelters, and yet pets are far more likely to receive donations than farmed animals.

Well, yes, because people care a lot more about pets than they do about farm animals. This isn't something you can "do the math" on and get the correct answer about which animals you should value more than others.

I made a comment about this in a previous post, but I just find that Vox article extremely disturbing...

> "It’s useful to imagine walking down Main Street, stopping at each table at the diner Lou’s, shaking hands with as many people as you can, and telling them, 'I think you need to die to make a cathedral pretty.' "

> "If I were to file effective altruism down to its more core, elemental truth, it’s this: 'We should let children die to rebuild a cathedral' is not a principle anyone should be willing to accept. Every reasonable person should reject it."

Well, no, that has nothing to do with effective altruism whatsoever. There's no way you can "do the math" to determine how much you should value art and culture.

If you decide you care about saving lives to the exclusion of all else, then sure, you can do the math for the most effective way to save lives. But that's just not the only thing people care about.

Most of the donations to rebuild the cathedral were unlikely to go towards saving dying kids. People donated to that specific cause because they cared about it.

The problem with that quote is that there are always dying kids, so if you see any money spent on anything as "letting children die" you're left with the conclusion that no money should ever be spent on art, or anything else that doesn't directly facilitate lifesaving. That can't be correct.

I personally value art a lot more than saving strangers' lives. It's just a difference in values, and it's bizarre to me that that author thinks every reasonable person should not value art enough to spend money on it.

There was some LW post that quoted Hume on this topic, saying something like "reason should serve the passions." You have to figure out what you value first, and then try to do it effectively. And not everyone values saving lives over everything else.

Ozy Brennan:

Even if you exclusively care about art and don't give a shit about human wellbeing, $100 million to renovate the Lincoln Center for better acoustics is an enormous waste of money. (I went to the Lincoln Center pre-renovations. Its acoustics were fine.) For that money, you could give forty of your favorite artists enough money that they'd never have to work a day job again.

JC:

I mean, you could do a lot of things with it. But I think it's a mistake to judge someone else's donation based on how it measures up to your yardstick or your values.

"Ozy thought the acoustics were fine" isn't necessarily the final standard for the board of an esteemed musical venue!

I just think everyone has different values and it doesn't really make sense to say "well really your money should have gone to my pet cause." It's Geffen's money - the right place for it to go is wherever he wants it to go. And I'm sure Geffen has also donated to artists or other charities.

The problem with your logic is that it seems to imply that the Lincoln Center should have donated the $85 million to malaria nets or deworming or something, and by that logic any organization should be doing similar things with all their money. But then we'd never have any other spending.

Whenyou:

Yeah I doubt people saying "I value the life of an American no more than a Nigerian!" would support spending the entire tax budget of their country on development in Nigeria, given that would be much more cost effective when it comes to saving lives.

Eschatron9000:

Indeed I don't support that, since my country would collapse completely, and therefore have no tax revenue in future years and thus save 0 lives anywhere.

Doug S.:

https://gwern.net/doc/philosophy/ethics/2011-yvain-deadchild.html

Or maybe, people actually do save all the dying kids that can be saved for relatively little money, so a $50,000 new car doesn't cost 150 dead children anymore?

(According to GiveWell's current estimates, the exchange rate between dead children and dollars is currently somewhere around $3000.)
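For concreteness, here is the arithmetic behind the comment above, as a minimal sketch. It uses only the figures quoted in this thread (the $50,000 car, the "150 dead children" framing, and the ~$3,000 GiveWell estimate); no other numbers are assumed.

```python
# Quick arithmetic behind the "dead child currency" point, using only
# the figures quoted in this thread.
car_price = 50_000

# The old essay's framing ("a $50,000 car costs 150 dead children")
# implies a cost per life saved of about $333:
implied_old_rate = car_price / 150
assert round(implied_old_rate) == 333

# At the quoted ~$3,000 per life, the same car trades off against
# roughly 17 lives, not 150:
lives_at_new_rate = car_price / 3_000
assert round(lives_at_new_rate, 1) == 16.7
```

So the implied "exchange rate" has risen roughly ninefold between the two figures, which is exactly what JC's later question about low-hanging fruit versus rising costs is probing.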

JC:

Is that by Gwern or by Scott?

Matrice Jacobine:

Yvain is Scott.

JC:

Yes I know that, but it's on Gwern's site and does not appear to be signed, it just has Yvain's name in the URL, so I wasn't sure.

JC:

I’m interested in knowing if we’ve gotten any low-hanging fruit! That is, is any of the rise in that exchange rate due to saving all the cheap dying kids, or is it just things costing more?

JC:

Thanks for that... I had seen it before and vaguely remembered that some EA wanted to use dead kids as a unit of currency! I was thinking of that as I wrote the comment but couldn't remember the details.

In-Nate Ideas:

There’s pretty much unlimited art and culture available for free online or just in life, which presents no financial tradeoff with saving lives. Perhaps you should clarify that you value the specific kinds of art and culture that require heavy financial investment over saving lives?

JC:

Huh? How is there unlimited art and culture for free? Someone had to create that.

In-Nate Ideas:

I mean free to experience at the margin. We’ve already made massive investments in art as a society, so an individual doesn’t need to spend money to access art - just to access certain expensive artistic experiences.

JC:

Ok, but the financial tradeoff isn't really about accessing the art; it's about creating it and maintaining it.

And all the resources and time used to create that art that is being accessed could have been directed to saving dying kids!

In-Nate Ideas:

I think perhaps the salient point is that we could, at current margins, shift massive amounts of funds from the arts to saving children, (plausibly even to the point of diminishing returns for lifesaving interventions) and we would still have plenty of art and culture.

JC:

So when is it ok to spend time or money on something else?

Bob Jacobs:

What about the critique that Effective Altruists don't actually "do the math" but, at best, do "part of the math", or "do the math insofar as it strengthens the desired argument, but no further". See for example: Mistakes in the Moral Mathematics of Risk ( https://reflectivealtruism.com/category/my-papers/mistakes-in-moral-mathematics/ ), Exaggerating the risks ( https://reflectivealtruism.com/category/exaggerating-the-risks/ ), and Better vaguely right than precisely wrong in effective altruism ( https://eprints.gla.ac.uk/289530/1/289530.pdf )

Ozy Brennan:

Haven't read the paper but will take a look. I enjoy Reflective Altruism (though I don't always agree), but I generally take this particular criticism he levies as an "effective altruists should do EA harder, fifty Stalins!" sort of critique. (Which is good! I agree we should do EA harder.)

Bob Jacobs:

The paper is in the same vein as the critique that EA should care more about systemic change. Roughly, the tl;dr is: by inheriting the faulty marginalist assumptions of libertarians/neoclassical economics, EA ignores interventions that do excellent work but don't fit in that framework. (Similar to my critique here: https://tinyurl.com/2pj69euy )

JC:

Wouldn't that be the opposite vein?

Bob Jacobs:

Well, no, but I understand why you would think that. The observation that the neoclassical crop of quantitative assumptions doesn't have good predictive power leads some to take qualitative data more seriously (including me, though it helps that I finally understand how qualitative data analysis is supposed to work, so maybe I'm biased by my excitement about this new methodology), but the paper actually doesn't do that.

It instead goes the route of endorsing a different crop of quantitative assumptions, ones informed by (orthodox) *sociology*, instead of (orthodox) economics.

(for the record, I do like that crop, but think we should do qualitative data analysis on top of that).

Matrice Jacobine:

Not sure about the use of "orthodox sociology" to describe quantitative sociology here, seems to create false equivalence between sociology (where both qualitative and quantitative methods are part of 101) and economics (where any approach giving any weight to any other science¹ is considered heterodox).

¹: not even just social sciences, consider ecological economics, etc.

Bob Jacobs:

Yeah, mhhh, maybe I should've used the term 'standard'? It was more that I wanted to signal that I know there are some sociologists who don't agree with using quantitative threshold models (because they think it's too reductive, like Becker) and economists who don't agree with the standard economics methodology (because they think it's not empirical enough, like Angrist and the other people I discussed in my economics post https://bobjacobs.substack.com/p/the-ea-community-inherits-the-problems )

Matrice Jacobine:

I'm not sure that's true? Thorstad gets a fair amount of meta and longtermist funding, so I get the sentiment that he is internal opposition to some extent, but he's also friends with Torres, and some of his criticisms are similarly deep and structural (the billionaire philanthropy and racism sequences in particular).

Ozy Brennan:

I've now read 'Better vaguely right than precisely wrong.' It's a reasonable criticism of GiveWell that the authors are mysteriously applying to the entire EA movement, which in fact recommends many of the 'lumpy' charities they discuss (animal advocacy, existential risk).

JC:

Ooh what do you think of this - this is my problem with EA, in a nutshell:

> Williams thought that the reason that utilitarianism is incompatible with moral integrity is that it alienates people from what may matter most to them: their projects, their relations with those they care about, and so on. It does this because it implies that their projects, commitments, attachments, and values matter no more in themselves, or from what Sidgwick called ‘the point of view of the universe’, than those of other people. One’s own projects and attachments therefore cannot have any priority or privileged role in the determination of how one ought to live or what one ought to do. To accept this, Williams thought, would be to surrender all that makes one’s life worth living.

https://forum.effectivealtruism.org/posts/hvYvH6wabAoXHJjsC/philosophical-critiques-of-effective-altruism-by-prof-jeff

Jasnah Kholin:

I tend to find this sort of criticism baffling. we live in an empty, uncaring universe - but we care. and yes, OF COURSE my relationships are not special from the universe's POV. so what? they are special TO ME, and this is enough.

my best attempt to understand this criticism is that the person expressing it doesn't have the inner strength to say "things matter because they matter TO ME", so they can't accept that the universe doesn't care without accepting that those things are actually not important.

I probably fail in theory of mind here. but i find this criticism zero convincing. on the other hand, i didn't try to make utilitarianism the one goal of my life. i have some amount of money and optimization that i allocate to impartial world improvement, and at other times i allocate my time to reading and writing blogs and books, celebrating Passover with friends, work, going on walks, etc.

so i find all this criticism based on a factually-incorrect empirical claim - "it alienates people from what may matter most to them".

i think there are schools of thought in EA that may be inclined to such thinking, but not the CEV-based school of morals. as it was said - Your Utility Function is Your Utility Function.

https://www.lesswrong.com/posts/cN9RJSJQGyqqergmC/your-utility-function-is-your-utility-function?view=postCommentsOld&postId=cN9RJSJQGyqqergmC

JC:

I think you're confused...

> this criticism is that the person expressing it doesn't have the inner strength to say "things matter because they matter TO ME", so they can't accept the the universe doesn't care without accepting that those things are actually not important.

What's being criticized is exactly the view that if the universe doesn't care, those things are actually not important.

To make this more concrete, consider a trolley problem where you have to choose between a loved one and five strangers. I believe the correct decision for you to make is to save the loved one, whereas the correct decision for a neutral person to make is to save the five strangers.

A utilitarian, in the view being criticized, would say that the correct decision, the ethical decision, for you is to determine things from a neutral perspective. This alienates you from what matters to you.

Jasnah Kholin:

well, i saw exactly zero people who think that "if the universe doesn't care, those things are actually not important". i expect that someone who believes that exists somewhere, but i don't think it's a mainstream EA position.

i will save the loved one, and expect 99%+ of EAs to do that, and expect that EA would not criticize the decision.

actually, i think i already write my opinion about that, here: https://thingofthings.substack.com/p/effective-altruism-maximizing-welfarist/comment/93029554

copying here:

I almost agree, with one disagreement. my understanding of EA is more... directional. you say "Effective altruism involves going all-in on these intuitions everyone shares." and... this just doesn't look true - most EAs do not go all-in. they just... go more than the median, which is very much not that.

and that is my sentence on EA. my morality is some CEV that i myself have no idea about, but i can see the direction clearly. there is a level of consequentialism that would be too much for me, but on the current margin, i want to move charity toward welfarism and consequentialism. I am also very suspicious of maximization per se, but I'm much more maximizing than the median.

i think there is a tails-come-apart thingy going on now. because there is such non-market inefficiency from a WC point of view (or maybe just a care-foundation point of view?), a lot of people who want more of that are part of EA. but if we win, if we feed all the hungry and prevent all the torture (and secure the future against threats), then the disagreements will surface.

there is a lot of now-unseen disagreement of the form "value A is 10x more important than value B" or the other way around, that is dwarfed by the best interventions being orders of magnitude better.

in a world where people are 10-30% welfarist-consequentialist, everyone who is more than 50% WC looks the same, but there is actually a big difference between 50% WC and 90% WC (and i don't believe in 100% WC).

also, i really don't think EA is that much maximizing. it's just... humans are not naturally strategic, the median level of maximization is, like, 3%, so people who are 20% maximizers look like a lot.

EAs are not utilitarians, we just think utilitarianism should take more place in our inner coalitions. somewhere between 1%-10%, up to 50% for the most extreme ones.

JC:

No, the mainstream EA and consequentialist position is that the right thing to do is to save five strangers, and they think the feeling that you'd want to save a friend is just immoral selfishness, even though it's what they'd actually do.

Matrice Jacobine:

The double standard between global health EAs and animalist EAs¹ and longtermist EAs² is worth pointing out on its own. Ben Kuhn has talked about it here:

https://www.benkuhn.net/slippery/

https://forum.effectivealtruism.org/posts/M9RD8S7fRFhY6mnYN/why-nations-fail-and-the-long-termist-view-of-global-poverty

It should be noted that Acemoglu, the latest Nobel-Prize-winning king of institutional development economics, is pretty anti-EA. And Sen and Nussbaum were basically the "systemic change" EA critics before EA even existed (with Nussbaum even attacking Unger's notorious proto-EA paper specifically).

¹: who are the least likely to actually identify as EA IME (I have family members who work in OPP-funded animal rights charities)

²: who still have the adjacent problem of being unwilling to interface with short-term AI ethics work, except for FLI, which happens to be the longtermist org most hostile to EA leadership

JC:

Do you have a reference to Nussbaum attacking Unger's paper? Which paper?

JC:

https://www.lrb.co.uk/the-paper/v19/n17/martha-nussbaum/if-oxfam-ran-the-world

Is it this review of Living High and Letting Die?

Matrice Jacobine:

Yes.

JC:

Wow, this is great:

> Williams thought that the reason that utilitarianism is incompatible with moral integrity is that it alienates people from what may matter most to them: their projects, their relations with those they care about, and so on. It does this because it implies that their projects, commitments, attachments, and values matter no more in themselves, or from what Sidgwick called ‘the point of view of the universe’, than those of other people. One’s own projects and attachments therefore cannot have any priority or privileged role in the determination of how one ought to live or what one ought to do. To accept this, Williams thought, would be to surrender all that makes one’s life worth living.

I think this is exactly right.

Victualis:

The bird example doesn't belong here, among a bunch of well-argued points, even if EY cited it back in 2007. If you save 2,000 birds and the total population is 4,000, that might prevent extinction. But 200,000 might be out of 20 million, so probably fewer than die of natural predation.

The way the question is asked, subtle nuances of the person administering the test, the order of the questions or any other questions asked, and recent news events about species extinctions all matter. The Desvousges et al. study was published in 1992, and I don't think it used especially sound methodology or analysis. Carson 2012 says: "in this study respondents were also told that the population of birds was very large, with the percent of birds being killed in the three split-sample treatments being similar: (a) “much less than 1% of the population”, (b) “less than 1% of the population”, and (c) “about 2% of the population."

I therefore don't think the reported results contain enough signal, except maybe "people care slightly about saving some birds but not that much" and "people will pay a bit more if the numbers are likely to be a more significant fraction of the total population". More careful later CV studies resulted in a more nuanced model; people aren't actually as scope-insensitive as that particular study makes them out to be.

Contingent Valuation: A Practical Alternative When Prices Aren't Available - American Economic Association

https://www.aeaweb.org/articles?id=10.1257/jep.26.4.27

SkinShallow:

I agree that numbers should be looked at very carefully for mass/abstract-level decisions.

But -- you will always run against the difference in values that JC is talking about in their comment. So numbers are a very good way of choosing which malaria charity to give to, or maybe even to choose whether to give to a malaria charity or an AIDS one. But they will ABSOLUTELY not tell you whether to give to either of those or a local arts initiative or even food bank down the road.

Another point is a fundamental advantage of, and at the same time a fundamental problem with, utilitarianism: it treats all units of a kind (dollars, human lives, DALYs, etc.) as exactly the same. That's a reasonable assumption for a public body (in my opinion), but I'd argue that it's not at all obvious for individuals.

So you're really not just arguing for looking at numbers. You're also smuggling in a particular value system here (lives/DALYs are all that matters). And I think it's one that's very much defensible, but it's not obvious at all.

There are piles of dead bodies in the history of humanity, many perished in horrific circumstances or much too early in life. Thinking of them gives me a daytime equivalent of nightmares, and in the ideal world I'd rather those piles were smaller and less stacked with people under 80. But it also gives a kind of perspective.

So, yes, like JC, I don't think it's entirely obvious that nobody should ever die to save (or, for that matter, to build) a cathedral, because while the opposite seems obscene indeed, choosing that every single time feels like it's edging towards something similar to the repugnant conclusion.

I think it also implies that people should never fight/be willing to die for, risk their children's lives and kill for things like "independence" or "freedom" or "national sovereignty" (assuming the conquering party was NOT genocidal). Yet many, many do. Would I? Probably not (assuming the "non genocidal" condition was definitely fulfilled). But I don't think those who do are stupid or even clearly wrong, in values sense.

Ozy Brennan:

I talked about values earlier in the series!

JC:

Link?

Also see my comment! This post seems to assume saving lives is the highest/only value!

Ozy Brennan:

I believe in your ability to look on the front page of this substack: https://thingofthings.substack.com/

JC:

Sadly that ability does not include the ability to determine which post you were thinking of! Is it one of these?

https://thingofthings.substack.com/p/what-do-effective-altruists-believe

https://thingofthings.substack.com/p/effective-altruism-maximizing-welfarist

If so, I don't see how they address that point!

Matrice Jacobine:

I assume the latter?

JC:

Yes, definitely this, in particular about the repugnant conclusion!

Rose:

I am half bought into EA (as my donations show - half go to EA causes), but the thing that I can't fully get on board with is the rigidity of the underlying values.

I care deeply about preserving a diversity of ~intelligent life. Whales specifically are dear to my heart, partly because they're so obviously intelligent and their population is under so many threats. This ought to be a cause that I can approach numerically: Which animals are the most intelligent? Which are most endangered? What's the best way to help them? ...but when I brought this up in an EA space people were essentially like "we don't care about whales because there aren't a lot of them. By mass, other animals are more important, so we won't help you evaluate this." But the whole point is that my goal isn't "reduce suffering by mass," it's "preserve a variety of types of brains!" I would give a lot more to keep a healthy breeding population of each whale breed alive than I would for the equivalent number of marginal whales.

Which is like, fine, no one has an obligation to care about my goals. But EA seems to have centralized around some very specific goals and seems unfriendly to other values as "not EA." Even when they would benefit from an EA approach.

Jasnah Kholin:

so my country has an EA Israel site, and a local site that has, besides global charities, also impact evaluations of local charities. for those of us who care about impact but are also not impartial and prefer their neighbors to strangers.

I think it's good, and i remember vaguely someone saying something similar about local EA like that.

which is to say - if you create something like that, it will happen. EA is unfriendly toward the obligation-to-those-closer-to-you frame, and yet, the site still exists.

this world is surprisingly Do-ocratic.

Rose:

Actually, thinking about this more, it feels like a weird lack to me. There are EAs who care about only humans. There are EAs who care about all living things, even thinking about the suffering of bacteria. And there are plenty of EAs in between who care about animals but not insects, or insects but not bacteria. Am I seriously the only person who cares a lot about elephants, whales, monkeys, parrots, etc, and much less about chickens and fish?

JC:

I'd imagine a lot of EAers would see this as a bias to overcome, since chickens and fish are culturally seen as food and valued less for that reason ("carnism"). If you had grown up in India you'd probably value cows a lot higher since they are culturally seen as pets there.

But we care about the things we care about. Reason should serve the passions.

Ozy said "Farmed animals vastly outnumber pets in need of shelters, and yet pets are far more likely to receive donations than farmed animals." That isn't surprising at all - most people care a lot more about pets than they do about farm animals. I don't think we can say they are wrong for that.

JC (Mar 7, edited):

Well by their logic we shouldn't care at all about endangered or threatened species.

I don't understand the idea behind looking at mass. What is their theory behind that?

Jasnah Kholin:

the theory is to prevent suffering. I'm, personally, very much fine with killing animals, but not fine at all with torturing them. and we torture a lot of animals now.

JC:

So why do we care about *mass* specifically?

Jasnah Kholin:

oh, i saw it as a shortcut to numbers, modulo ability to suffer. EA does not prioritize by mass.

titotal:

I support the use of quantitative reasoning in philanthropy, and I agree with most of this post. I will defend donating to anti-malaria efforts, even if the effects are uncertain, because they are as certain as we can get about large scale benefits of philanthropy.

However, at a certain level of uncertainty, I start to object, because I think we are running into "garbage in, garbage out" territory. Someone says that there's a 20% chance of extinction, conditional on AGI existing. Where the hell does that number come from? Do you have a base rate of human extinctions to draw from? Did they run Monte Carlo simulations and 20% of them came up doom? That's not a number, that's a vibe.

When we use numbers to cover the impact of malaria net donations, that's a number that comes from actual studies with error bars. That's doing quantitative analysis. Math + math = math. But on the other hand, vibe + math = vibe that you're pretending is math. And I'm not saying you can't donate based on vibes: there are lots of good causes that aren't easy to quantify. But don't pretend you're making the decision based on math when you aren't.

JC:

I don't think that's fair. It's a rough estimation, which is the best you can do on some things.

Brock:

If you're buying lottery tickets in order to daydream about being rich, buying one lottery ticket is just as effective as buying multiple tickets. So the sensible number of lottery tickets to buy is one.

JC:

Or maybe the sensible number is 25 million:

https://www.vice.com/en/article/texas-lottery-winner-bought-every-ticket/

JC:

No, because buying zero is just as effective, or at least almost as effective - you can always daydream about being rich.

Brock:

Psychologically, I find it’s a lot easier to daydream about being rich with a lottery ticket in my hand. Maybe you’re better at daydreaming than I am.

I don’t buy them anymore, for ethical reasons, but when I did, I would buy exactly one.

JC:

My daydreams about being rich involve me having saved my bitcoins back in 2012 instead of spending them on Silk Road

Victualis:

Do you also daydream about losing your bitcoins because you lost the key, or the exchange was hacked/went bust? I find it a good way to counter my wilder crypto daydreams.

JC:

Why "for ethical reasons"?

MoltenOak:

RE concert ticket insurance: I've done the math now and I think, in this case, you haven't? It seems like insurance can very easily be worth it! Though please correct me if I misunderstood :) I've written it up as a post:

https://moltenoak.substack.com/p/concert-ticket-insurance-when-to

EDIT: I think I've misunderstood what you were saying? You probably meant that for the "average person", buying ticket insurance isn't worth it (that's how the insurer makes a profit), rather than for an individual. So, it's a failure if "sure, why not" is the entirety of your *reasoning*, rather than the *result* of your reasoning. That makes sense.

Anonymous Dude's avatar

This is very beautifully put and is more or less the way I thought back when I was still idealistic. Of course moral intuitions are subject to the same quantitative laws as everything else!

(That died around 2000 or so, so I never had the chance to become an EA; I probably would have if I were born about 25-30 years later.)

MoltenOak's avatar

Someone please explain why this is mathematically unsound somehow: “I bought insurance on my concert ticket so I’ll get a refund if it turns out I can’t go.”

Jasnah Kholin's avatar

the way insurance works is that you should expect to lose money by buying it, summed over all possible worlds.

you should buy insurance against catastrophic losses you cannot bear, but not for concert tickets. if you go to 100 concerts and can't make it to some of them, then comparing a world where you bought insurance each time with one where you didn't, you end up with more money in the world where you didn't buy insurance.
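A minimal expected-value sketch of this point, with made-up numbers (the ticket price, miss probability, and premium below are all illustrative assumptions; the insurer's markup over the expected loss is what does the work):

```python
# Expected-value sketch of ticket insurance (illustrative numbers).
ticket_price = 100.0   # assumed ticket price
p_miss = 0.05          # assumed probability of missing the concert
premium = 10.0         # assumed insurance premium

# Without insurance you lose the ticket price with probability p_miss.
expected_loss_without = p_miss * ticket_price   # ~5.0
# With insurance the refund cancels the loss, so your cost is the premium.
expected_cost_with = premium                    # 10.0

# The insurer's markup means the buyer loses in expectation.
print(expected_cost_with > expected_loss_without)  # True
```

With these numbers the premium is double the expected loss, so across many concerts the uninsured world comes out ahead.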

MoltenOak's avatar

Actually, I've worked through the math and I'm now convinced this isn't the case. I've written it up as a post:

https://moltenoak.substack.com/p/concert-ticket-insurance-when-to

In particular, whether the insurance is worth it or not depends on my (subjective?) estimate of how likely I'll make it to the concert. If I'm very uncertain, then it can very easily be worth it. Moreover, it does *not* depend on the ticket price - at least not mathematically (though in practice the value I get paid back is of course based on the ticket price).
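One way to sketch this claim (a simplified pure-expected-money model with illustrative numbers, not the linked post's exact math): the break-even point is where your subjective probability of missing the event equals the premium-to-payout ratio.

```python
def insurance_worth_it(p_miss: float, premium: float, payout: float) -> bool:
    """In pure expected-money terms, buying pays off when the expected
    refund exceeds the premium, i.e. when p_miss > premium / payout."""
    return p_miss * payout > premium

# Break-even probability is premium / payout (10% here), so the answer
# flips on your estimate of p_miss.
print(insurance_worth_it(0.05, 10.0, 100.0))  # False: expected refund 5 < premium 10
print(insurance_worth_it(0.20, 10.0, 100.0))  # True: expected refund 20 > premium 10
```

If your estimate of p_miss is better than the insurer's population average, this comparison is where that information advantage shows up.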

Jasnah Kholin's avatar

I agree with that - if you consider yourself a "lemon", it may be worth buying insurance. But this is not how the example in the post went.

MoltenOak's avatar

EDIT: Oh I think I understand. You mean if one tends to have bad luck with such things? Yeah, the probability definitely matters, but notice that even a low probability warrants buying if the ratio of premium to payout is correspondingly low!

EDIT 2: I think I get it now - you were talking about the "average person" not needing to buy insurance, rather than a particular person actually trying to figure out whether it's worth it *for them*.

I don't understand the first part (but I'm not a native speaker). Could you clarify under which conditions you think my analysis applies and when it doesn't (such as within the post, you seem to suggest)? :)

Jasnah Kholin's avatar

i'm not a native speaker either - that's the real reason i didn't write the thousands of words i wanted to :-)

basically, the insurer always comes out of insurance with a profit. that's the whole point: all the people who buy insurance, in net, lose. so if i just bought a new washing machine and got an offer of insurance, should i take it? well, no. in expectation, if i save the cost of the insurance, then by the time i need it i can use the saved money to pay for the repair or a new machine, and have some to spare.

because i have no reason to believe i know better than the insurer what the insurance premium should be, and the premium was calculated in such a way that i should expect to lose by paying it.

when does this break? when i believe i know better than the insurer - when i have information they don't have. but even then, i should take into account that the people who buy insurance are not random, that people tend to buy insurance when they expect to need it, and that the insurance company priced that into the premium.

the right place to use insurance is when being positive in expectation is not the goal, because the loss i'm trying to insure against would ruin me. so i do an acausal trade with myself, shifting a little money from the worlds where i'm fine to the world where everything is lost, since utility is not linear in money. except instead of doing it myself, the insurance company helps set up this trade and takes some money for the service.

if i can self-insure - if i have the liquidity to "pay myself" the washing machine premium - then across all those transactions i expect to have more money than if i used the insurance company's service to set this up.

and indeed, if you look at washing machine insurance in my country, you will see that people who buy it frequently pay the cost of a new machine in monthly contributions by the time they actually need the insurance.
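The "utility is not linear in money" step can be illustrated with a toy expected-utility calculation. Everything below is an assumption for illustration: log utility as a stand-in for risk aversion, the wealth and loss figures, and a premium marked up 50% over the expected loss.

```python
import math

def expected_utility(wealth, loss, p_loss, premium=None):
    """Log-utility sketch: uninsured, you face the loss with probability
    p_loss; insured, you pay the premium and your wealth is certain."""
    if premium is None:
        return p_loss * math.log(wealth - loss) + (1 - p_loss) * math.log(wealth)
    return math.log(wealth - premium)

wealth = 100_000.0

# Catastrophic loss (90% of wealth, 1% chance, premium = 1.5x expected loss):
# insurance wins despite being negative in expected money.
cat_loss, p, prem = 90_000.0, 0.01, 1.5 * 0.01 * 90_000.0
print(expected_utility(wealth, cat_loss, p) < expected_utility(wealth, cat_loss, p, prem))  # True

# Small loss (a washing machine, same markup): self-insuring wins.
small_loss, prem2 = 500.0, 1.5 * 0.01 * 500.0
print(expected_utility(wealth, small_loss, 0.01) > expected_utility(wealth, small_loss, 0.01, prem2))  # True
```

The same 50% markup flips from worth paying to not worth paying as the loss shrinks relative to wealth, which is the "insure catastrophes, self-insure gadgets" rule in miniature.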

MoltenOak's avatar

Thanks for the detailed reply :) Interesting perspective regarding moving resources across possible worlds.

I understand and agree with the general principle that insurance leaves the average person worse off, on average. (Though as you say, in critical situations it is still worth it.) And I agree that, to outplay the insurer, I need information that they don't have. Often, I don't have that: before I can get many types of insurance, they will ask me questions to determine how much I'll have to pay.

However, I think there are two factors giving the individual an advantage. Firstly, for some kinds of insurance, such as ticket insurance, I don't have to answer any questions. Thus, to the insurance company, I am exactly an average customer, for which they know how to compute premiums. But I myself am not an average customer: I can estimate the risk of not making it to the event better than they can, at least in principle. Secondly, I can profit from the decisions of people who put less thought into buying insurance. If some people get insurance even though they don't really need it ("just to make sure" and/or out of risk aversion), then this can lower the price for me too, making it more likely that the insurance is worth it for me.

Overall, I guess the central question is whether I can arrive at a better estimate of the risk that I won't make it than the insurer can. This, however, seems entirely possible. And once I have that, I can just do the computation I've outlined in my post and be done.

JC's avatar

I'd love to know what you think of this critique of EA:

https://www.lrb.co.uk/the-paper/v19/n17/martha-nussbaum/if-oxfam-ran-the-world
