this post was submitted on 24 Aug 2023
167 points (94.7% liked)
Technology
you are viewing a single comment's thread
This goes against everything the NYT preaches about the press being under attack and needing protection. AI consumption of news content makes the news more accessible, and their paywalled articles don't overlap with what ChatGPT is doing. This is really a bunch of old people getting butthurt about tech they don't fully understand.
If you claim to fully understand machine learning technology, you should also understand why it's considered theft by many. Everything that a generative AI churns out is ultimately derived from human works. Some of it is legally unencumbered, but much of it is protected by copyright and integrated into an AI model without the author's permission or knowledge, and reused without attribution.
I have no love for the NYT, but in this, they're right.
I can't say I fully understand how LLMs work (can anyone, really?), but I know a little, and your comment seems to misunderstand how they use training data. They don't use their training data to "memorize" sentences; they use it as an example (among billions) of how language works. It's still just an analogy, but it really is pretty close to an LLM "learning" a language by seeing it used over and over. Keeping in mind that we're still in an analogy: it isn't considered "derivative" when a person learns a language from examples of that language and then goes on to write a poem in that language.
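To make the "learning patterns, not memorizing sentences" point concrete, here's a toy sketch (nothing like a real LLM, just the simplest possible statistical language model): a bigram model that learns which word tends to follow which from example text, then generates new sequences from those statistics rather than replaying stored documents. The corpus here is made up for illustration.

```python
import random
from collections import defaultdict

# Made-up training text for this sketch.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which words follow which. The model keeps only
# these statistics, not the original text as a retrievable document.
follow = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a new word sequence from the learned statistics."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follow.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))  # a new sequence, not necessarily in the corpus
```

Real LLMs replace the bigram counts with billions of learned parameters, but the basic relationship to the training data is the same kind of thing: statistics about how language is used, distilled from examples.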
Copyright doesn't even apply, except perhaps in extremely fringe cases. If a journalist puts their article up online for general consumption, it doesn't violate copyright to use that work as a way to train an LLM on what the language looks like when used properly. No aspect of copyright law covers this, and I don't see why it would be any different from the human equivalent. Would you really back the NYT if they claimed that using their articles to learn English violated their copyright? Do people need to attribute where they learned a new word, or strengthened their understanding of a language, whenever they answer a question using that word? Does that even make sense?
Here is a link to a high level primer to help understand how LLMs work: https://www.understandingai.org/p/large-language-models-explained-with