Blog Post 4: Algorithms 2

News Article:

How Generative AI Works and How It Fails

Purpose of the Case Study:

The aim of this case study is to explain how LLMs like ChatGPT work, to show how much we should worry about the ways we trust and use them, and to educate readers on the costs of these models.
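To make the "how LLMs work" part concrete, here is a minimal sketch of the core idea behind generative models: next-token prediction. This is my own toy illustration, not code from the article or the case study; a simple bigram count table over a made-up sentence stands in for the neural network that real models like ChatGPT train over trillions of tokens.

```python
import random
from collections import defaultdict, Counter

# A tiny made-up "training corpus" (hypothetical, for illustration only).
corpus = ("the model predicts the next word and "
          "the next word follows from the last word").split()

# Build a bigram table: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    # Sample one token at a time, weighted by how often it followed the
    # previous token in the training text. Real LLMs use the same loop
    # shape, with a neural network in place of this count table.
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break  # no known continuation; stop generating
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Even this toy version shows why training data matters so much: the model can only ever echo patterns from whatever text it was trained on, which is exactly why the questions below about copyrighted and pirated data are important.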

Question Answers:

  1. How can those who want to change the system go about doing so?

To stop the spread of models trained on copyrighted and pirated data, we can adopt the habit of only using models trained on legally obtained data, and take note of what kinds of data those are. Spreading the word about this piracy would also help a lot.

  2. Can the market solve the problem, such as through licensing agreements between publishers and AI companies?

There will always be a way to pirate these documents, but I think we can go a long way if we develop agreements that are above-board and preemptive, so that these companies can access the data they want legally. If the agreements are in place beforehand, they will be much easier to uphold.

  3. What about copyright law, either by interpreting existing law or by updating it?

Current law can only do so much, since piracy still gives companies a way around it. If we make it clearer what counts as illegal, and what AI companies can and cannot do, it will help keep them on track and following legal practices.

  4. What other policy interventions might be helpful?

If we create policy about the types of data allowed to train these giant models, we could help keep toxic and abusive speech out of the training data, which could lead to better outcomes in the models.

My Question & Why:

Q: Is the development of these AIs morally wrong? Are we going too far by prioritizing building the best AI over making sure we understand what these systems are actually doing?

I chose this question because I worry that at some point we will get too deep into AI to get back on track, and we will end up with these predictors creating policies.

Reflection:

It was hard to think much on this topic, as I believe the much bigger concerns with AI are the mistreatment of the people involved in training these models and the lack of clarity about what the AI is actually doing. That being said, it was good to think about this from a different lens, which is why I chose the questions that I did.