California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush...
I recommend reading this article by Cory Doctorow, and this one by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.
The first article makes some good points, though it takes things very literally, and I can see how they arrive at some of their conclusions. They break the issue down step by step very well. Copyright is murky as hell, I'll give them that, but in court it's the final generated product that matters.
The second paper, while well written, is more of a press piece. But it does touch on one important point relevant to this conversation:
The LCA principles also make the careful and critical distinction between input to train an LLM, and output—which could potentially be infringing if it is substantially similar to an original expressive work.
This is important because a prompt like "create a picture of ____ in the style of _____" can absolutely generate output drawn from specific copyrighted material, the kind of sampling for which courts have required royalty payments in the past. An LLM can also imitate a voice actor's voice so accurately that it can be confused with the real thing; there have been union strikes over this.
All in all, this is new territory, and that's part of the fun of watching laws evolve. If you removed the generative part of AI, would that be enough?
The funny part is that most headlines want you to believe that using things without permission is somehow against copyright, when in reality fair use is part of copyright law, and it's the reason our discourse isn't wholly controlled by mega-corporations and the rich. It's sad watching people desperately try to become the kind of system they're against.
Fair use is based on a four-factor analysis that considers the purpose of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original work.
Fair use is ambiguous and limited, and it's tested on a case-by-case basis, which is what makes this moment in copyright law so interesting.