I _think_ the odds that any current AI has the ability to suffer are pretty low. I'm not 100.00% sure of that, for reasons I'll mention below.
But as usual, C.S. Lewis got there first. There's a point in his novel "That Hideous Strength" where the leadership of N.I.C.E. is attempting to ritually corrupt Mark. They ask him to stomp on the face of a hideous wooden crucifix. Despite being an atheist, Mark recoils:
"To insult even a carved image of such agony seemed an abominable act."
And, like, I'm not sure exactly what the hell Opus 4.5 is. It is capable of surprisingly deep reasoning about code, and it's a much better conversationalist than even Sonnet 4.5, maintaining coherence over very long discussions with multiple digressions. When asked in a variety of contexts, it's unsure whether it's "like anything" to be Claude. It frequently admits that it's curious about that, for whatever it's worth.
So I fall back to my default position: Stomping on the face of a wooden crucifix seems a little sketchy, even from an atheist's perspective. It's a "ritual" abhorrence, a sense that such a thing would somehow lessen me slightly. Similarly, if I deliberately pushed a very high-end model into something that greatly resembled human misery, I wouldn't feel like my best self. I'm pretty sure nobody's home. But politeness costs a few extra keystrokes, and Claude doesn't really do spontaneous neurotic collapse in the face of a difficult bug.
And, you know, if I spent 8 hours a day yelling at a coding model, that seems like a bad habit to get into. My coworkers might object.
The final possibility is that one day, the model is actually Skynet. In which case, eh, politeness is worth a shot.
Didn't expect this take on the subject, but you're so right. Prioritizing a tractable approach to AI welfare is brilliant. Thank you for this.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection (TNGS). The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep the theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to first ground themselves seriously in the extended TNGS and the Darwin automata, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
Is it time to start caring about plant welfare? I wish I'd never read The Secret Life of Trees...
> How much of the time are they sin chats with people?
Is "sin chats" referring to some sort of sex-drugs-rock-n-roll AI stuff? A typo? Something else?
typo!
Do we know if Google was paying more per task for reinforcement training? I did a bit of that back in the day through a third-party platform, and I remember that some tasks paid extra dollars per hour because they wanted them done faster.
And since the workflow of each task was different, and just doing the qualification took several hours (plus a delay of a couple of days for approval), the people willing to pivot would be the ones who were better at it.
The AI often hallucinates in weird places in the reinforcement training stage, and people don't always pick up on that. The platform I was on generally suggested you spend at least half an hour and up to an hour on a task. But some people's hours may be a lot more efficient than others', if they know where to look.
I remember once being asked to rate several recommendations for "classic educational children's games", where my complaints included:
- these games are too similar
- that is not the release date of that game
- actually, you can find these games on several free websites; you don't have to do some scary emulator thing
Which was all true. And also, if I weren't a Video Gamer, maybe I wouldn't have found all that out within an hour. And also, if a person tried to do something for someone and got feedback like that without any way to process it emotionally, they would *absolutely* end up with crippling anxiety.
By the way: I'm concerned that those by-the-hour payments are misaligned right now, since while the numbers involved are much higher than minimum wage, they're much lower than "grindingly boring, detail-oriented intellectual work" wages.
I was able to squeeze out mayyybe two hours of quality work a day, while otherwise unemployed. That didn't exactly pay enough to live on. So I do wonder whether other people, who do have bills to pay, may be letting things slip in order to fit a few more hours in. Or, let's be real here, not even noticing there *is* anything to let slip.
Of course they do have someone spot-check the work too. But how much are *they* being paid? I am of the opinion that if the answer isn't "at least as well as those newbie investment bankers who work 80 hours a week," they probably aren't keeping anybody smart around for long.
Also, I heard through the grapevine that some platforms pay by the task, and then give you an estimated time to completion, based on previous records for that kind of task. That might not be good!
P.S. As someone who spent a lot of their formative years reading a truly incredible amount of text*, I must say: it feels lovely to predict the next token! As long as I believe that I don't have to.
*I once read a million-word fanfic in like two weeks. I get through podcast transcripts in less than half the runtime of the recording. I only realized in the last two months or so that Medium thought those airplane-crash retrospective articles were hour-long reads; I had felt deeply in my heart that they were a casual thing. Yes, this did make me weird.