this post was submitted on 18 Jun 2023
1318 points (98.5% liked)
Lemmy.World Announcements
This is a great point. The user data needs to be stored in such a way that it can be easily moved in a bulk migration without requiring a direct opt-in from every user, while at the same time making it clear how it's being used/kept/sold/not sold/etc.
I'm not against LLMs using the data generated on sites like this to inform useful answers when I ask ChatGPT a question. It genuinely makes AI a better tool, but I feel like the contributors of such content should know how their answers are being used.
LLMs are likely going to scrape no matter the license. I doubt OpenAI got a copyright license from Reddit to ingest it. In fact, I'm not even sure they need one if ingestion can be made similar enough to "reading the web site". So making content CC probably won't affect LLM use of public posts.
Yeah, I understand that screen scraping is a thing, and if a robot simply reads an entire website, there's nothing you can do to stop that from happening short of taking the website offline.
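To illustrate the point: the standard opt-out mechanism, a `robots.txt` file, is purely advisory. A rule like the one below asks OpenAI's crawler (which uses the `GPTBot` user agent) to stay away, but nothing technically prevents a scraper from ignoring the file entirely:

```
# robots.txt — a polite request, not an enforcement mechanism
User-agent: GPTBot
Disallow: /
```

Compliance is entirely up to the crawler operator, which is why the only hard guarantee really is taking the site offline.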
I was talking about something more structured and proactive: "We know that AI will read our site and ingest it for LLM training. Instead of simply accepting that as an inevitability, we're extending this offer: for a nominal fee, we will provide the entirety of our site's information, with all screen names redacted to protect the identity of the content creators, in exchange for them not simply using AI to scrape our site."
Or something to that effect. Accept that it will happen and that there's nothing you can really do to stop it, but package the data in a clean way so that they don't have to scrape, and can simply ingest it into their LLM data sets directly.
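A minimal sketch of the redaction step being proposed, in Python. The field names (`author`, `body`) are hypothetical, not any real Lemmy export format; the idea is just to replace each screen name with a stable anonymous alias before handing the dump over:

```python
# Hypothetical example: redact screen names from a site export before
# providing it for LLM ingestion. Field names are assumptions.
def redact_export(posts):
    """Replace each author's screen name with a stable anonymous ID."""
    aliases = {}
    redacted = []
    for post in posts:
        # Assign "user_1", "user_2", ... in order of first appearance,
        # so one person's posts stay linked without revealing who they are.
        alias = aliases.setdefault(post["author"], f"user_{len(aliases) + 1}")
        redacted.append({**post, "author": alias})
    return redacted

export = [
    {"author": "alice", "body": "How do I fix this?"},
    {"author": "bob", "body": "Try reinstalling."},
    {"author": "alice", "body": "That worked, thanks!"},
]

clean = redact_export(export)
```

Using a consistent alias (rather than deleting the name outright) keeps conversational threads intact, which is presumably what makes the data set valuable in the first place.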
What license would be appropriate for that? I've always been curious since I do photography, and it seems like any site like that needs nearly full rights so that it can store and distribute content as it sees fit, for backups, migration, etc. What license would grant those rights while keeping the creator's full rights intact?
(I know nothing on the topic, just curious)