this post was submitted on 22 Jul 2023
Technology
[–] LexiconDexicon 14 points 11 months ago (1 children)
[–] [email protected] 8 points 11 months ago (1 children)

Me too. I don't understand why publishers like Macmillan aren't suing them.

[–] Ensign_Crab 6 points 11 months ago

Do you think publishers want to pay human authors?

[–] HonorIsDead 12 points 11 months ago (1 children)

I'm conflicted on a lot of this. At the end of the day it seems like these LLMs are simulating human behavior to an extent: exposure to content, then generating similar content from it. Could Sarah Silverman be sued by comedians who influenced her comedy style and routines? Generally, no. I do understand the risk of letting these 'AI' run rampant and displace a huge portion of the creative space, which is bad, but where should the line be drawn? Is the challenge only that they were trained on material they don't own? What recourse will people have when an LLM is trained on wholly owned IP?

[–] [email protected] 14 points 11 months ago (2 children)

She’s suing for copyright infringement, basically, not the LLM emulating her style.

The LLMs read books by her and many, many others that they didn’t buy, because unauthorized copies had been uploaded to the web (happens to every popular book).

Honestly, I don’t know if she has a case. Going after the people who illegally uploaded her book would be the proper route, but that’s always nearly impossible.

Long and short, LLMs benefited from illegal copies.

[–] [email protected] 1 points 11 months ago

I see a lot of people claim the training model included copyrighted works particularly books because it can provide a summary of it. But it can provide a summary of visual media too, and no one is claiming it’s sitting there watching films.

If the argument is that it has quite a detailed knowledge of the book, that’s not convincing either. All it needs is a summary; it can fill in the blanks and get close enough that we can’t tell the difference. Nothing is original.

[–] [email protected] -1 points 11 months ago (1 children)

If you upload an illegal copy of a book and I download it, not realizing or caring that it’s pirated, and then I re-upload it elsewhere, you and I have both committed copyright infringement. This feels like the same thing.

I suspect the case will depend largely on whether the ways that the models were trained using her works qualify as fair use.

[–] islandofcaucasus 3 points 11 months ago (2 children)

Your example is faulty. If you upload an illegal copy of a book and I read it and then tell people all about it, I am not committing copyright infringement.

[–] kromem 2 points 11 months ago* (last edited 11 months ago)

How did you read it?

Did you access it where it was illegally posted online?

And in so doing, copy it locally in order to read it?

Guess what? According to copyright law in the US, you just committed copyright infringement.

There are two separate claims.

One, that training is infringement, will hopefully be found to be without merit, or it's a slippery slope to the death of fair use.

The other, that OpenAI committed copyright infringement by downloading pirated books, isn't special to the AI stuff in any way. It doesn't matter how they used it. If they can be found to have downloaded it - even if they never so much as opened the file - they are liable for civil damages that can be as high as $150,000 per work if they knew in advance that they were pirating it, and not less than $200 per work even if they didn't know.

This is the result of years of lobbying by the various digital rights owners over the past few decades. It's a very broad scope of law and OpenAI should rightfully be concerned if they didn't actually purchase the copyrighted material they used to train.

You can learn and share the knowledge from a book I might illegally upload, but if you are caught having made a copy of the pirated textbook I uploaded, you are liable for damages completely separate from what you did with the knowledge from the books.

[–] [email protected] -1 points 11 months ago

If you use that illegal copy to create a work, then your copy infringes copyright (unless it falls under fair use). LLMs don’t count as people in any legal sense, and training them doesn’t have a legal status comparable to a real person reading books.

[–] [email protected] 12 points 11 months ago

Why is crap from 2 weeks ago being posted like it's new news yet again?

Is this one of those bot accounts that aren't marked properly, or is OP just after karma (which doesn't exist on this site)?

[–] [email protected] 2 points 11 months ago

I'm still waiting on proof for any of these allegations. So far it's just been people suing for the sake of suing and hoping they strike gold. If anyone can point to any evidence at all (read: not hearsay) then I'll gladly review it, but as it stands, it's nothing.

[–] feedum_sneedson -3 points 11 months ago

I don't like Sarah Silverman.
