Blog Post 10: Info Blog 2

Published on:


Website:

ARTificial: Why Copyright Is Not the Right Policy Tool to Deal with Generative AI


What is the goal of this case study?

This case study focuses on copyright law: how it was created to solve a different problem, and why expanding it to deal with generative AI training and usage will have adverse effects. The author aims to show that copyright law is not equipped to handle these cases, and that expanding it will stifle innovation and creativity.


Discussion:

Questions:
  1. Is training with unlicensed works a fair use, or an infringing one?
    • The article agrees that the authors of these works should be credited, and that the end result of training, while impossible to trace back to individual data sources, is still linked to them. Therefore, we both agree that this kind of training is not fair use, and can be quite problematic. If you have a model trained on a couple hundred images, you might see remnants of individual images appear in the end result, and you might be able to determine when specific input data is being used. But at the scale of GPT-4 and other very large models, you end up with something that is simply impossible to trace back, and you will never be able to see the individual data in the results. This makes it very hard to decide which data deserves what compensation, so we need a new system for crediting the authors of the original work. While AI companies could negotiate with individual creators one by one, that would be impossible for the sheer volume of data they need (OpenAI said as much in a quote in the essay). A new crediting system would make it easy to access all the available information while still maintaining legality with regard to fair use.
  2. Are the outputs generated by GAI original or derivative works?
    • According to the case study, they are more than just derivative works. The author argues that the simplified descriptions of the training process presented in court reduce it to a derivative process, but that there is enough more happening to make it a much harder decision. I think they are still derivative works, because while these models can create seemingly new images and results, in reality they are just drawing on work already done by humans, and they can only summarize and predict based on what we have already learned. Since they cannot fully synthesize new information, I believe they cannot be original works, and therefore must fall under the umbrella of derivative works.
  3. Should the outputs be entitled to some form of copyright protection? If so, how should we deal with AI authorship?
    • Copyright protection is designed to protect creators from people stealing their work, and potentially their profit. Since AI doesn’t have any need for profit or recognition, there is no reason (in my mind) that we should give AI any of these protections. The only case where I see a need for copyright or restrictions on the use of AI outputs is when the output depicts personal information that the person who requested the content doesn’t want shared.
  4. Should the law treat differently the outputs of AI versus those of human authors? How would we justify such a framework without affecting the principle of “aesthetic neutrality”? How does that fare with the definition of “creativity” as a requisite for copyrights law’s “originality” requirement?
    • Even if the end results are very similar, the process of creating AI outputs is so different from the human process that the law has to treat them with different considerations. I believe the law should treat the two very differently because of these differences, and as the author argued in the essay, if we don’t treat them as different, we run into many issues trying to fit far too much under the same umbrella. If we expand copyright even further, we will end up with a system that regulates the wrong things and stifles human creativity just to protect AI creativity, which no one needs. We need to prioritize the human side of this, and that means creating a separate system to deal with AI outputs in order to protect the rights of humans. While some consider AI’s work creative, we shouldn’t compare the two directly under copyright law; we should recognize that AI creativity is simply different. Per the essay, AI output can fit under two of the three types of creativity (combinational and exploratory), but it cannot produce anything transformational. I believe that makes it fundamentally a different kind of thing, which should be considered completely separate from its human counterpart.

My Question:

Could AI be a lawyer? Or does that have to be a person?

Why?

This is the first time creativity can be attributed to something other than a human, and that makes everything more complicated. If AI can be creative, and it can do so many other things we previously thought only humans could do, when can it be a lawyer? What is stopping it now, and what concerns would the public have?


Reflection:

I am weirdly interested in the law, so this topic is really interesting to me. Most times I’ve heard about lawyers, or read about what they have done, it has been about expanding or getting around rules for their own, or their clients’, benefit. This is easy to see in all the media surrounding lawyers, like the show SUITS, where everything they do is based on loopholes. It was refreshing to read about lawyers trying to prevent these loopholes, who truly have the public’s best interest in mind. This article did make me terrified of what the future of AI will hold, but I am glad people are trying to regulate it properly.