Economists be like “it’s okay that this economic arrangement harms some people, we can compensate them financially for the harm” and then not financially compensate the people who were harmed
I also think AI companies should compensate artists for stuff they used in training data. But I suspect, given the large amount of data used, and abundance of alternative sources of images, the equilibrium price would be basically nothing for individual artists. So IMO, that's way way down in the political priority list. Some major media corporations seem to have more of a vested interest in pursuing this, they can fight it out amongst themselves.
If "the explicit intent of all AI companies is to automate human intellectual labor" (all human labor is "human intellectual labor" since the Industrial Revolution, a factory worker isn't selling their superior ability to transform joules into motion), why should the solution be to entrench rent-seeking for only a small minority of Western upper-middle-class people with cultural capital?
"If we treat AIs like people for the sake of whether their media consumption is fair use, I can’t see how you could justify treating AIs differently than humans in other ways."
I don't think this follows. For me it's more useful to think of it in terms of gerunds. "Suffering" is something we try to prevent. "Absorbing other people's work, learning from it, and creating work of your own influenced by it, as long as it's not a direct rip-off" is not something we try to prevent. I don't see any reason to regard this as some special human carve-out or privilege – it's just that previously, there haven't been many examples of non-humans doing this. (We wouldn't stop a bird from being influenced by a song it heard; though if someone trained their bird to perfectly replicate a copyrighted song and performed that in public, we might ask them to pay royalties!) Therefore, being allowed to do that is not proof of human-like moral status – that behaviour is allowed by default, regardless of the entity. (Just as suffering is to be averted by default, regardless of the entity.)
If something is *literally* learning (as opposed to "learning" as a behavioral analogy a la a crime sentencing predictive algorithm "learning" which traits are correlated with recidivism), I would not be nearly that certain that they can't suffer. For one thing, suffering evolved as a way of helping animals learn!
Curious what you mean by "literal" learning and why we should think neural networks aren't doing it.
(I also think the move from "training AI to produce art is morally analogous to training humans to produce art" to "AIs are morally equivalent to humans" is too fast)
All the learning is in the training process and not by the model itself?
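A toy sketch of that distinction (purely illustrative, not how any particular model is built): in a gradient-trained model, the weights change only inside the training loop; an inference call uses frozen weights and changes nothing.

```python
# Toy linear model: "learning" happens only in train(), where the weight
# is adjusted by gradient steps on the data.
def train(data, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            pred = w * x
            w -= lr * (pred - y) * x  # the weight changes here, and only here
    return w

def infer(w, x):
    # At inference time w is fixed: this call does no learning.
    return w * x

w = train([(1.0, 2.0), (2.0, 4.0)])  # converges to w ≈ 2, i.e. y ≈ 2x
print(round(infer(w, 3.0), 2))       # → 6.0
```

Whether "the optimizer learned, not the model" is a morally meaningful distinction is exactly what's at issue in the thread; the sketch only shows the mechanical split.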
> If we treat AIs like people for the sake of whether their media consumption is fair use, I can’t see how you could justify treating AIs differently than humans in other ways
The answer to this seems obvious. The reason we treat AI differently from humans in domains involving wellbeing, rights, and being shut down/killed, is because AIs are not conscious beings. They can't be harmed, have their rights violated, or murdered because they don't experience anything. But the analogy between AIs training on art in order to produce output similar to that art, and humans learning from the art they consume and producing similar art, has nothing to do with whether the AI is conscious. The point is that learning skills from art does not violate that art's copyright. I don't see how the fact that we treat AIs differently from humans in other ways has any relevance here.
To me, one key difference between training an AI on a work vs a person learning from the work is that the AI can be run massively in parallel to extract tons of information about the work very quickly, whereas the person has to put in a lot of work to express it. I think this is sufficient to treat sharing an AI trained on the work as more like sharing a copy of the work than like sharing stuff inspired by the work.
This feels a bit like a post-hoc rationalisation to me. If some mega-genius were born who could read a hundred novels in an hour and then write one influenced by them in an afternoon, we wouldn't say, "This doesn't count as original because you didn't sweat over it for long enough." We would judge that by the novel itself.
A human being with novel capacities being born isn't something that anyone has control over, though, and thus is exempt from ethical questions. Intentionally designing technology and making decisions about how to deploy that technology is something that people do have control over and therefore is subject to ethical considerations.
Sure, and I'm not arguing that AIs are exempt from ethical questions. It would be perfectly valid to say, "We should be dubious of AI art, because AI art will have bad outcomes for human artists, wider society etc." I just don't think it makes sense to say, "We should be dubious of AI art, because AI art is plagiaristic by nature, because of [some property that wouldn't actually demonstrate this in the case of a human being.]"
A human can't physically write a novel or paint a detailed scene in an hour. Their hands just don't move that fast, even if they're thinking at an impossible speed. The AI doesn't have that limit and can theoretically turn out a thousand similar images in a day.
That is mostly because we prefer simple laws and accept imperfect handling of edge cases.

If a lot of people like that were born, it would be significant enough to cause a reevaluation and probably a change of the relevant laws. Those laws implicitly assume our current level of ability; when that assumption breaks, so do the laws.
Some people keep talking as if some actions being allowed when humans do them, but not allowed once someone mechanizes them, is a new phenomenon we’ve never dealt with before. It’s not! And it’s not new to conclude that they should be treated differently, because they’re different in practice – specifically different *on the basis* that machines can consistently take actions orders of magnitude faster, and at far greater scale, than we ever need to worry about with humans.
May I buy a ticket from a ticket sales website? Yes. May I buy 20 tickets from a ticket sales website? Probably yes. May I and 20 of my friends each buy a ticket from a ticket sales website? Yes. May I write some code that buys arbitrary numbers of tickets from the ticket sales website one after the other, and run it? Probably not. Why not? Because if I could, we couldn’t have ticket sales websites. But couldn’t a human also buy 1000 tickets one after the other, or have 1000 of their friends buy tickets? Yes, but there’s a higher barrier to someone doing this, so most people won’t, so we can in general still have ticket sales websites.
May I eat a lot of food at a buffet? Yes. May I bring my machine that runs on an incineration engine to a buffet and feed it the food? No. May I record the concert I’m going to? No. May I, a theoretical person with an extremely good memory and an extremely good singing voice, attend a concert and then reproduce it for my friend? Yes. May I take a shell home from the beach with me? Possibly not, but it’s likely no one’s going to go after me for it. May I come to the beach with construction equipment, scoop up a dump truck full of shells, and drive off? No.
And so on, and so forth.
A number of things are beyond what an individual can do alone. No one has ever produced animation of any substantial length by themselves. Not even close to the length of a normal film. Even fully producing a short is a heroic task. Drawing all the key frames for a single episode of anime is commonly done, though considered very difficult, but this is very far from producing anime/animation by yourself.
This is troubling to me. You either need to compromise your vision a ton by having many collaborators or pay your servitors (sorry junior animators) to flesh out your story. Most likely both. I hate this.
Note that many difficult things can realistically be achieved alone. You can probably personally clear a site and build a house yourself with hand tools. It would take effort, but you could probably do it. And not an ugly house – a very nice one! You could probably build a sailboat good enough to sail from SF to Alaska. I believe in humanity. But some things are realistically beyond us.
Many other art forms simply require too many assets. You cannot produce them alone. This mostly means you either risk exploiting others or you don't produce the art.
AI is close to making anime doable by one person. It can already produce art assets of acceptable quality for a board game or simple video game. Assets can also be produced on the fly, in response to player actions. Right now this isn't the most reliable in terms of quality. But doing anything like this is effectively impossible without AI, or some other strategy that hasn't been invented yet.
May the future be bright.
> Economists often suggest that, if a policy makes most people better off but harms a small group severely, you can make everyone better off by compensating the losers.
Yes, but IP laws are already putting a thumb on the scale in artists' favor. It seems off to compensate them for putting somewhat less weight on the thumb.
> An AI isn’t going to plagiarize your article, in the sense that it memorizes your article and regurgitates it. That’s just not how generative AI works.1 When an AI produces a sequence of words, the sequence is almost certainly either new or so common that it’s not copyrightable.
This usually won't happen, but depending on the prompts you can definitely get LLMs to reproduce chunks of their training data. See e.g.: https://arxiv.org/html/2412.06370v1
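The memorization checks in that literature typically measure verbatim overlap between a model's completion and a candidate source text. A self-contained sketch of such a check (the completion string is hard-coded here for illustration; in a real experiment it would come from prompting the model with a prefix of the source):

```python
def longest_verbatim_run(completion, source):
    """Length, in words, of the longest word sequence appearing verbatim in both texts."""
    c, s = completion.split(), source.split()
    best = 0
    for i in range(len(c)):
        for j in range(len(s)):
            # Extend a matching run starting at (i, j) as far as it goes.
            k = 0
            while i + k < len(c) and j + k < len(s) and c[i + k] == s[j + k]:
                k += 1
            best = max(best, k)
    return best

source = "it was the best of times it was the worst of times"
completion = "the model wrote it was the best of times again"
print(longest_verbatim_run(completion, source))  # → 6
```

A long verbatim run against a copyrighted source is the kind of evidence the cited paper uses to argue that regurgitation, while rare, does happen.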
It's also fairly easy to get AI image generators to spit out copyrighted characters: I asked Gemini to "generate a cartoon image of a blue hedgehog" and it spat out Sonic the Hedgehog with all of his trademark features.
I tried just now, and got not-sonic.
The "potential market" factor doesn't quite mean what it literally says in American jurisprudence. The clearest case of this is not relevant to AI: reviews. If I write a negative review of your work and that convinces everyone to not buy it, that literally speaking destroys your potential market but is still clearly fair use.
I don't know if there's jurisprudence on anything similar to what AIs do: unlike negative reviews, they do destroy the market for your work by substituting for it rather than criticizing it, but the actual substitutes in question are not themselves infringing.
I have to disagree with the democratization of art, as a skill, being a net benefit. The very act of overcoming and experiencing the grind, the failures, the successes and the like of creating art is where the value is derived. To narrow it down to a utility, to output-oriented thinking, cheapens the entire endeavor and robs whatever theoretical invalid is proposed of the actual fruits of creating. And if someone cannot possibly enjoy this, in any way whatsoever, if the concern is purely the outcome, then they shouldn't be creating and are basically just shoveling more slop onto the pile, and we're better off not enduring whatever they have AI spit out for them.