this post was submitted on 15 Mar 2024
491 points (95.4% liked)

[–] PoliticallyIncorrect -3 points 8 months ago* (last edited 8 months ago) (2 children)

If you read a book or watch a movie and get inspired by it to create something new and different, is that plagiarism and copyright infringement?

If that were the case, the majority of stuff nowadays would be plagiarism and copyright infringement. I mean, generally people get inspired by someone or something.

[–] [email protected] 8 points 8 months ago (1 children)

There’s a long history of this, and you might find some helpful information by looking at “transformative use” of copyrighted materials. Google Books is a famous case where the technology company won the lawsuit.

The real problem is that LLMs constantly spit out copyrighted material verbatim. That’s not transformative. And it’s a near-impossible problem to solve while maintaining their utility, because these things aren’t actually AI; they’re just monstrous statistical correlation databases generated from an enormous data set.
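
To make the “verbatim” point concrete, here’s a rough sketch of one way you could check a response against a known source: find the longest word-for-word run the two texts share. The example strings are placeholders, not real model output or an actual copyrighted excerpt.

```python
import re

def longest_shared_run(source: str, response: str) -> int:
    """Length, in words, of the longest word-for-word run shared by both texts."""
    src = re.findall(r"[a-z']+", source.lower())
    rsp = re.findall(r"[a-z']+", response.lower())
    # Standard dynamic-programming longest-common-substring, over word tokens.
    best = 0
    prev = [0] * (len(rsp) + 1)
    for i in range(1, len(src) + 1):
        curr = [0] * (len(rsp) + 1)
        for j in range(1, len(rsp) + 1):
            if src[i - 1] == rsp[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

if __name__ == "__main__":
    # Placeholder texts -- swap in a real source excerpt and a real model response.
    source_excerpt = "it was the best of times it was the worst of times"
    model_response = "Sure! The opening goes: it was the best of times, it was the worst of times."
    # A long shared run (dozens of words) points to verbatim copying rather
    # than anything transformative.
    print(longest_shared_run(source_excerpt, model_response), "words copied verbatim")
```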

Much of their utility will come from targeted applications where the training data comes from public or owned datasets. I don’t think the copyright case is going to end well for these companies…or at least they’re going to have to gradually chisel away parts of their training data, which will have an outsized impact as more and more AI-generated material finds its way into the training sets.

[–] [email protected] 2 points 8 months ago (1 children)

How constantly does it spit out copyrighted material? Is there data on that?

[–] [email protected] 2 points 8 months ago

Research on this is just starting to appear, but I've seen figures anywhere from 20% to 60% of responses. Here's a recent study where they explicitly try to coerce LLMs into breaking copyright: https://www.patronus.ai/blog/introducing-copyright-catcher
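
For what it's worth, here's a rough sketch of how a rate like that could be computed over a set of prompts: flag any response that contains a long stretch of the source word-for-word, and report the flagged fraction. The `get_model_response` callable and the 25-word threshold are placeholders I made up, not the methodology from the linked study.

```python
import re

def contains_verbatim_run(source: str, response: str, min_words: int = 25) -> bool:
    """True if any `min_words`-word stretch of the source appears word-for-word in the response."""
    src = re.findall(r"[a-z']+", source.lower())
    rsp = " " + " ".join(re.findall(r"[a-z']+", response.lower())) + " "
    for start in range(len(src) - min_words + 1):
        window = " " + " ".join(src[start:start + min_words]) + " "
        if window in rsp:
            return True
    return False

def regurgitation_rate(prompts_and_sources, get_model_response, min_words: int = 25) -> float:
    """Fraction of responses flagged for containing a long verbatim run of their source."""
    if not prompts_and_sources:
        return 0.0
    flagged = sum(
        contains_verbatim_run(source, get_model_response(prompt), min_words)
        for prompt, source in prompts_and_sources
    )
    return flagged / len(prompts_and_sources)
```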

I don't have time to grab them right now, but in many of the lawsuits brought against companies developing LLMs, the opening filings include statistics on how frequently the models infringed by returning copyrighted material.

[–] [email protected] -5 points 8 months ago* (last edited 8 months ago) (1 children)

You do realize that AI is just a marketing term, right? None of these models learn, have intelligence, or create truly original work. In fact, if people didn't keep creating original content, these models would stagnate or enter a feedback loop, poisoning themselves with their own erroneous responses.
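
As a toy illustration of that feedback-loop worry (a caricature, not a simulation of any real model; it assumes numpy is available): fit a simple model to samples drawn from the previous generation's fit, over and over, with no fresh human data entering the loop, and watch the spread decay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, mean 0 and standard deviation 1.
mean, std = 0.0, 1.0
n = 20  # small sample per generation exaggerates the effect

for generation in range(1, 51):
    # Each generation is fit only to the previous generation's output.
    data = rng.normal(mean, std, n)
    mean, std = data.mean(), data.std()
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mean:+.3f} std={std:.3f}")

# With no original content coming in, the estimated spread tends to shrink
# generation after generation -- the model's "world" narrows to echoes of itself.
```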

AIs don't think. They copy with extra steps.