Technology
This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in a DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived versions as sources, NOT screenshots. This helps blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
Quoting this comment from the HN thread:
As the commenter points out, I could recreate this result using a smaller offline model and an excerpt from the Wikipedia page for the book.
You are treating publicly available information as free from copyright, which is not the case. Wikipedia content is covered by the Creative Commons Attribution-ShareAlike 4.0 license. Images might be covered by different licenses. Online articles about the book are also covered by copyright unless explicitly stated otherwise.
My understanding is that copyright applies to reproductions of the work, which this is not. If I provide a summary of a copyrighted summary of a copyrighted work, am I in violation of either copyright because I created a new derivative summary?
Not a lawyer, so I can't be sure. To my understanding, a summary of a work is not a violation of copyright because the summary is transformative (it serves a completely different purpose from the original work). But you probably can't copy someone else's summary, because now you are making a derivative that serves the same purpose as the original.
So here are the issues with LLMs in this regard:
That's either overfitting, which means the training went wrong, or plain chance. Gazillions of bonkers court cases over "did the artist at some point in their life hear a particular melody" come to mind. Great. Now that that's flanked with allegations of eidetic memory, we have reached peak capitalism.
Don't all three of those points apply to humans?
Aren't summaries and reviews covered under fair use? Otherwise, newspapers have been violating copyright for hundreds of years.
Summarising stuff is literally all ML models do. It's their bread and butter: see what's out there and categorise it into a (ridiculously) high-dimensional semantic space. Put a bit flippantly: you shouldn't be surprised if it gives you the same synopsis for both Dances with Wolves and Avatar, because they are indeed very similar stories, occupying roughly the same position in that space. If you ask not for a summary but for a full screenplay, it's going to invent random details to fill in what it ignored while categorising; again, the results will look similar if you squint right because, again, they're at the core the same story.
It's not even really necessary for those models to learn the concept of "summary" -- only that, in a prompt, it means "write a 200-word output instead of a 20,000-word one". The model will produce a longer or shorter description of that position in space, hallucinating more or fewer details. It's really no different from police interviewing you as a witness to a car accident and having to be careful not to prompt you wrong, e.g. by presupposing that you saw certain things, or you, too, will come up with random bullshit (and believe it): it's all a reconstructive process, generating a concrete thing from an abstract representation. There's really no art to summarising; it's inherent in how semantic abstraction works.
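To make the "same position in semantic space" point concrete, here's a minimal sketch using sentence embeddings. The library (sentence-transformers) and the model (all-MiniLM-L6-v2) are my own illustrative choices, not anything mentioned above: two blurbs with the same story shape land close together in the embedding space, while an unrelated one doesn't.

```python
# Hedged sketch, not anyone's actual pipeline: illustrate "same position
# in semantic space" with sentence embeddings. The library and model are
# illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

dances = ("A soldier lives among an indigenous people, adopts their ways, "
          "and ends up defending them against his own side.")
avatar = ("A marine lives among an alien people, adopts their ways, "
          "and ends up defending them against his own side.")
unrelated = "A retired chef opens a bakery and mentors a young apprentice."

emb = model.encode([dances, avatar, unrelated])

# Cosine similarity between embeddings: nearby points mean the model treats
# the texts as roughly the same story, which is the point above.
print(float(util.cos_sim(emb[0], emb[1])))  # high: same story shape
print(float(util.cos_sim(emb[0], emb[2])))  # noticeably lower
```

If both blurbs collapse to nearly the same point, then asking for a 200-word or a 20,000-word rendering of that point only changes how much detail gets (re)invented on the way back out.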