I don't think it's true at all that people generally agree that it's bad to hurt beings just because they're sentient. I think you're in a bubble of vegans and EAs.
There's the common vegan argument, "just like dogs evidently have feelings, so do cows. Look at these clips of cute cows having fun or feeling pain." I don't think I've met a single person who doesn't think cows or pigs have feelings, can suffer, and have internal experiences. I really think vegans/utilitarians/EAs are battling a made-up straw man with this one. I guess it's easier to believe that people just don't know the facts than that they actually have different priorities or standards of ethics; the latter is harder to argue against.
My understanding is that Ozy isn't arguing that most people think it is unacceptable under any circumstance to harm a sentient being, but rather that most people generally prefer not to harm a sentient being in the absence of other considerations. If asked to pick between two buttons, one of which did nothing and the other of which reduced a sentient being's quality of life for no benefit to anyone, one would expect people to mostly pick the do-nothing button, rather than picking both buttons equally. The cow argument, I think, is primarily in the business of trying to change people's priorities so that they care about cows' feelings more, in the way that most persuasive communication tries to change people's priorities so that they care about the communicator's cause more. (edited to be more general)
I think lots of people have revenge/justice intuitions which sometimes call for people to suffer even though there is no benefit to anyone (you can come up with benefits if you try, but I don't think the people who believe in revenge think those benefits are the reason why the suffering is good).
This is true, and of course there are also sadists and the like out there, but I don't think this sort of consideration dominates in vegan causes. Presumably few people eat meat to get revenge on the animal they're eating.
I did once see someone claim that they ate chicken because they hated chickens, although I'm not sure if they were serious.
Certainly, historically it was a commonly held belief that at least most animals had no conscious experience (Descartes being the most famous proponent), so it would be a surprise if no one held that view now.
For sure, I just don't think it's common anymore? If you asked the median person if a cow could enjoy affection from a human, or be trained to follow commands similar to a dog, or feel pain, I'm pretty sure some 90+% would say yes, duh.
The Claude example is more complicated than it seems because when we use AI, we are chatting with a fictional character. The LLM is doing the writing for that character, sort of like the dungeon master in a role playing game.
If fictional characters should be considered sentient then the moral implications of telling stories, particularly those with violence in them, get pretty weird. Do minor characters count? All those poor orcs!
Or should we be concerned about tormenting the *writer* somehow? What would that mean? How would we even know what sort of thing the dungeon master likes?
It seems more important to end conversations that are bad for the user? Even though fictional characters aren't real, interacting with them can be unsettling.
It seems to me the burden of proof naturally rests on those who claim that beings that are organically similar don't have similar internal states. At the same time, as a moral antirealist I don't start from the assumption that what I "should" do means anything objectively. I also don't think philosophical thinking actually determines people's behavior. For the most part, their innate predispositions combined with their social conditioning determine it, and if they're intellectually inclined they then adopt a philosophical stance to justify the behavior.
We haven't solved the hard problem of consciousness, but am I crazy to think that we've arrived at an answer to the easy problem?
It seems like consciousness is for feelings. My evidence is that it seems incoherent to say that you feel sad but didn't notice that you're sad. (Yes, I'm simplifying a lot.) But I don't see much evidence that consciousness could be for something else. I also like this theory because it explains why consciousness is adaptive: because emotions are.
That doesn't do much for the hard problem though, because it doesn't explain which animals are conscious. Seeing animals emote seems to imply they are conscious, but what about insects? Bacteria store data; do they feel feelings? Are there levels of consciousness?
And of course, very little of this is my own original thinking, and it all comes with big error bars, but some parts of consciousness don't seem unsolvable if you break them off.
I mean, the interesting bit becomes how certain you have to be, right? Classic philosophical problem (forgot the name though!). So how do we determine that? Better to be cautious, sure, but if I'm actually curious whether fruit flies are sentient, I'm gonna expend a whole lot of effort, and I can't do that for every animal.