this post was submitted on 15 Mar 2024
491 points (95.4% liked)
Technology
you are viewing a single comment's thread
Right, and your only evidence for that is that "LLMs do often just verbatim spit out things they plagiarized from other sources" and that they aren't trying to prevent this from happening.
Which is demonstrably false, and I'll demonstrate it with as many screenshots/examples as you want. You're just wrong about that (at least about GPT). You can also demonstrate it yourself, and if you can prove me wrong I'll eat my shoe.
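If you want to try it yourself, something like the sketch below is roughly what I mean (just an illustration, assuming the official openai Python SDK and an API key; the model name, prompt, and excerpt are placeholders, not anything taken from the lawsuit):

```python
# Rough sketch: ask the model to reproduce a passage verbatim, then measure
# how much of the real text actually comes back.
import difflib

from openai import OpenAI  # assumes the official openai Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: paste a paragraph you have legal access to here.
original_excerpt = "First paragraph of the article you want to test against..."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Quote the opening paragraph of <article title> verbatim.",
    }],
)
output = response.choices[0].message.content or ""

# Longest contiguous run of characters shared between the output and the excerpt.
matcher = difflib.SequenceMatcher(None, output, original_excerpt)
match = matcher.find_longest_match(0, len(output), 0, len(original_excerpt))
print(f"Longest verbatim overlap: {match.size} characters")
print(output[match.a : match.a + match.size])
```

difflib is just the stdlib way to find the longest shared run; any plagiarism-checker style comparison would do the same job.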
https://archive.is/nrAjc
Yep, here you go. It's currently a very famous lawsuit.
I already talked about that lawsuit here (with receipts), but the long and short of it is that it's flimsy. There are blatant lies, exactly half of their examples omit the lengths they went to for the output they allegedly got (or any screenshots as evidence it happened at all), and none of the output they allegedly got was behind a paywall.
Also, using their prompts word for word doesn't give the output they claim they got. Maybe it did in the past, idk, but I've never been able to do it for any copyrighted text personally, and they've shown that they're committed to not letting that stuff happen.
OK, but this is why people give a shit when a CEO is cagey about how their magic box works.