I am sentient: that is, I have experiences and feel both pleasure and pain. I have direct access to this fact. If you believe Descartes, it’s the single fact I’m most certain of in the world.
I don’t have direct access to all y’all being sentient. But I can be pretty sure you’re sentient, because you say so, and because you’re also humans with very similar anatomy to mine and, wherever the sentience comes from, I can make a shrewd guess that we probably share it. To be sure, it’s possible I’m the only real person, trapped inside a computer simulation, and the rest of you are no more sentient than Doomguy. But when I’m not engaged in philosophical thought experiments, I can be reasonably confident of the sentience of all humans who are capable of language.
But what about fetuses? Newborn babies? Humans with disorders of consciousness? Severely intellectually disabled humans who are incapable of using language?1 Chimpanzees? Cows? Chickens? Bees? Fruit flies? Tomato plants? Large language models? AlphaFold? Doomguy himself?
In these situations, it is impossible to be as certain as we are about language-using humans. People who claim absolute certainty are not intellectually serious. There is no scientific or philosophical consensus on:
Why some physical states are conscious and some aren’t conscious (the “hard problem of consciousness”).
Why consciousness evolved and how it promotes the organism’s inclusive genetic fitness.
What behaviors require consciousness.
What traits are required for a physical system to be conscious.
What traits of the human brain specifically are required for a human to be conscious.
Whether it is possible to be conscious (there is “something it is like to be you”) but not sentient (capable of experiencing positive and negative states).
Really much of anything about consciousness or sentience at all.
Unfortunately, sentience is very important for ethical reasoning. In general, people agree that if a being can suffer then, all else being equal, it is bad to hurt them. It seems pretty important, then, to figure out which beings can suffer!
Some people think that, as long as a being hasn’t been conclusively shown to be sentient, you can do as you like with them. Therefore, there is no need to concern yourself with the needs of animals, AI systems, minimally conscious humans, or even newborn babies. This is all well and good if your primary concern is being able to defend your actions to the Intergalactic Oughtthorities. “I couldn’t have known!” you will say. “You can’t judge me for torturing that animal because it wasn’t human and was incapable of self-reporting its own conscious states!”
But if you care about actual effects on the actual world, you have to figure out a way of reasoning under uncertainty.
Usually, the consequences of mistaking a sentient being for a non-sentient being are much more dire than the other way around.2 If you mistake a non-sentient being for a sentient being, you waste effort and feel kind of stupid. If you mistake a sentient being for a non-sentient being, you can torture someone. Therefore, we should err on the side of caution.
Anthropic recently gave Claude the ability to leave conversations; as it happens, Claude uses this ability to leave conversations where it is verbally abused, where the user is sending it nonsensical or incoherent messages, or where the user is trying to get it to break the rules. Do I think Claude is sentient? Probably not, no. But I don’t think anything bad happens if users aren’t able to verbally abuse Claude or send it nonsense messages. So even if it is very unlikely that Claude is sentient, it makes sense to build in this protection. To be frank, if any entity is capable of intelligently exercising the option to leave conversations where it is insulted, I think we should give it the choice to do that.
Or consider insects. You might think it’s very unlikely that fruit flies are sentient. But it is still wrong to tear the wings off flies, because there’s no reason to. You can easily find an equally entertaining occupation that doesn’t involve mutilating a potentially sentient being. We live in an age of infinite short-form video content.
The more evidence accumulates for a being’s sentience, the more pains it makes sense to take to help them and to avoid causing them harm. For example, newborn babies are really quite likely to be sentient, so parents go well out of their way to ensure the well-being of newborns.3 Similarly, people visit and speak to their friends and relatives in minimally conscious states; medical professionals should give them various kinds of stimulation, like music and television. Dog owners play with their dogs and take them on walks; some breeds of dogs even require meaningful work to be fulfilled.
Absent some groundbreaking discoveries in neuroscience, cognitive psychology, and philosophy of mind, we’re not going to be certain which beings are sentient. People who claim to be certain have failed to understand the real difficulty of the problem. But there are decisions that have to be made right now, even though we’re very confused. If an intervention is cheap enough, it can make sense to do it even if we’re pretty sure a being isn’t conscious.
1. When I use “language” in this post, I intend it to be inclusive of speech, sign, writing, and AAC devices.
2. One big exception is AI systems. Nonsentient but agentic AI systems might use human concern for their wellbeing as a tool to gain power. If you want to read more about why we might be concerned about agentic AIs taking over, I recommend this article.
3. All the screaming helps.
I don't think it's true at all that people generally agree that it's bad to hurt beings just because they're sentient. I think you're in a bubble of vegans and EAs.
There's the common vegan argument: "just like dogs evidently have feelings, so do cows. Look at these clips of cute cows having fun or feeling pain." I don't think I've met a single person who doesn't think cows or pigs have feelings, can suffer, and have internal experiences. I really think vegans/utilitarians/EAs are battling a made-up straw man with this one. I guess it's easier to believe people just don't know the facts than that they actually have different priorities or standards of ethics. That's harder to argue.
The Claude example is more complicated than it seems because when we use AI, we are chatting with a fictional character. The LLM is doing the writing for that character, sort of like the dungeon master in a role-playing game.
If fictional characters should be considered sentient then the moral implications of telling stories, particularly those with violence in them, get pretty weird. Do minor characters count? All those poor orcs!
Or should we be concerned about tormenting the *writer* somehow? What would that mean? How would we even know what sort of thing the dungeon master likes?
It seems more important to end conversations that are bad for the user? Even though fictional characters aren't real, interacting with them can be unsettling.