Re: footnote 2. I don’t think their argument is that the Nazis will win the rational debate: it’s that winning the rational debate will not prevent the Nazis from winning, like they won in Germany before WWII.
The items in the list aren't bad, but it reads more like the output of a brainstorming session than a well-thought-out ethical system. As a result, it's hard to predict not only what actions the system recommends but also what facts determine which actions are best. Some preferences not specifically mentioned will get shoehorned into an existing item once they're raised, and others won't, and that arbitrariness is hidden away.
For example, when is the social pressure of "everyone viciously mocks you if you do this" a restriction of freedom and when is it an expression of freedom? Does it only count as a restriction of freedom if the thing being mocked is the leaving of one's house? Where's the boundary line, and, more importantly, what computation produces that boundary line?
I think you could argue preference utilitarianism is more fair/egalitarian because it lets moral patients state their preferences and then the ethical thing to do is something like, "work out how to satisfy as many preferences as possible". Whereas it seems like a capabilitarian approach ends up being shaped by the preferences of the person who selected the capabilities, or the preferences of the people they talked to when determining which capabilities to select.
I also think preference utilitarianism is more promising on the computational front, e.g. voting could be considered a primitive computational implementation of preference utilitarianism, and you can imagine more advanced computational implementations like quadratic voting, sortition, targeted opinion polls plus machine learning, etc.
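To make the quadratic voting example concrete, here's a minimal sketch of the core mechanism (the names and scenario are made up for illustration; this is just the "n votes cost n² credits" rule, not any real system):

```python
# Minimal sketch of quadratic voting: each voter gets a credit budget,
# and casting n votes on an issue costs n**2 credits, so expressing a
# strong preference is possible but increasingly expensive.

def quadratic_cost(votes: int) -> int:
    """Credits needed to cast `votes` (absolute) votes on one issue."""
    return votes ** 2

def max_votes(budget: int) -> int:
    """Most votes a voter can afford on a single issue with `budget` credits."""
    n = 0
    while quadratic_cost(n + 1) <= budget:
        n += 1
    return n

def tally(ballots):
    """Sum signed votes per issue; each ballot maps issue -> signed votes."""
    totals = {}
    for ballot in ballots:
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# Three voters, 100 credits each: 9 votes cost 81 credits, 5 cost 25, etc.
ballots = [
    {"parks": 9, "roads": -4},   # strong pro-parks voter (81 + 16 = 97 credits)
    {"parks": -5, "roads": 5},   # mixed voter (25 + 25 = 50 credits)
    {"roads": 7},                # single-issue voter (49 credits)
]
print(tally(ballots))  # {'parks': 4, 'roads': 8}
```

The quadratic cost curve is what makes this interesting as an implementation of preference utilitarianism: it elicits not just the direction of each preference (as plain voting does) but some information about its intensity, while making it expensive to dominate every issue.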
> I do, incidentally, disagree with utilitarians about some things. I do think if an adult with all the central capabilities chooses, of their own free will, at reflective equilibrium, to make themself miserable for no greater benefit whatsoever, then I might think this is a bad decision but neither I nor society should be permitted to stop them
As the friendly neighbourhood preference-utilitarian, I feel compelled to note that I'm on your side on this one. If their preference is genuinely to be sad, it is sort of the whole deal that I say "fair enough". So I don't think you're "disagreeing with utilitarians" on this point; you're disagreeing with a specific subset of utilitarians.
I think that even a hedonic utilitarian could support this instrumentally.
I.e. that if we try to coerce people into being happy we'll get a bunch of false positives where we end up bullying people with unusual utility functions.
Having read Nussbaum >5 years ago, I also remember having the sense that the justification for the capabilities was very incomplete. Also, I thought the implications she drew were oddly narrow. Like in one paper, having gone through the list of capabilities, she then says, "Therefore, we should fund this particular school in India that educates girls to become future politicians."
I lack your intuitions about which of these are the key important ones (probably because I don't particularly value 'what it means to be human' except as an instrumental "many humans seem to like these things" matter, with the liking being the important part and the humanity being something to throw out the moment it trades off against the liking), but the general idea here of "put the measurement criteria front-and-center" is a very nice one, such that I'm finding the overall model here surprisingly tempting despite my disagreement over the specifics.
My big concern, looking at this list, is that, even given the "people are free to not exercise their capabilities once they have them" disclaimer, I'd still fully expect any serious project to give more people these capabilities to have the side effect of forcing them into exercising them in many cases. If people are required to have the capability to use their minds in ways informed by "an adequate education", I'd expect a lot of them to be forced into said education, with no option to not-exercise that capability on offer. Knowing the politics fandom, I similarly expect that most efforts to make people "able to participate effectively in political choices that govern one's life" would have the side effect of inflicting unwanted political-choice-participation on people who would rather stay out of the whole thing. Et cetera.
So, on the whole, I think the list here suffers from the substantial flaw that it's pushing for freedoms only in one direction? If you're pushing to make people capable of X, and not also explicitly pushing to make them capable of not-X, you're going to end up with a lot of people forced into X even if on some theoretical level that's not an objective you're directly targeting.
Yeah, I think I second this concern.
Freedom of thought and association are important for humans, but “life” and “emotion” are already on that list. I don’t see an intuitive way to put association above life.
People hate being addicted to drugs, but take them anyway. Social media and slot machines are designed to keep people mindlessly engaged without really having fun. I don’t think it is immoral to take away someone’s heroin, especially if they told you they want to quit.
By having a fixed list, you won’t notice if something is missing. As a possible example, how about a capacity for self-improvement? Utilitarians (and most other moral theories) would say it is important that people have the capacity for self-improvement.
I feel like capabilitarianism focuses a lot on the ability to make choices, and not much on the quality of the options on offer. If my local ice cream shop starts making tastier ice cream, that makes my life better; it's a good thing. I prefer a form of utilitarianism that assigns utility both to "happiness", vaguely defined, which includes things like tastier ice cream, and "freedom", vaguely defined, which includes Nussbaum's list.
Thanks for writing your blog. It's been included in our blog post on our favorite Autistic blogs: https://anautismobserver.wordpress.com/2022/06/18/what-are-your-favorite-autistic-blogs/