this post was submitted on 21 May 2024
509 points (95.4% liked)
you are viewing a single comment's thread
Did we memory-hole the whole ‘known CSAM in training data’ thing from a while back? When you’re vacuuming up the internet, you’re going to wind up with the nasty stuff too. Even if it’s not a pixel-by-pixel match of a photo it was trained on, there’s a non-zero chance that what it’s generating is based on actual CSAM. Which is really just laundering CSAM.
IIRC it was something like a fraction of a fraction of 1% that was CSAM. The researchers identified the images through their hashes, but the images themselves weren't actually available in the dataset because they had already been removed from the internet.
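For the curious, that kind of hash matching is conceptually simple. Here's a minimal sketch in Python, assuming a LAION-style metadata CSV with a precomputed `md5` column and a plain-text blocklist of known-bad hashes (both file layouts are assumptions for illustration, not the researchers' actual tooling):

```python
import csv

def load_known_bad_hashes(path: str) -> set[str]:
    """Load a newline-separated list of known-bad image hashes."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_entries(metadata_csv: str, bad_hashes: set[str]) -> list[dict]:
    """Return metadata rows whose precomputed hash is on the blocklist.

    Assumes each row has 'url' and 'md5' columns, the way LAION-style
    metadata ships. Matching on stored hashes is how images can be
    identified even after the URLs behind them have gone dead.
    """
    with open(metadata_csv, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row.get("md5", "").lower() in bad_hashes]
```

Worth noting that exact hashes only catch byte-identical copies; production screening systems like PhotoDNA use perceptual hashes so re-encoded or resized copies still match.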
Still, you could get AI-generated CSAM even if you were 100% sure that none of the training images included it, since that's what these models are built for: combining concepts without needing to have seen them together before. If you hold the model's hand enough with prompt engineering, textual inversion, and img2img, you can get it to generate pretty much anything. That's the power and the danger of these things.
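That "combining concepts it never saw together" point is easy to demonstrate with an innocuous prompt. A sketch using the `diffusers` library (the checkpoint name is just the commonly used example model, not a claim about which model was at issue):

```python
import torch
from diffusers import StableDiffusionPipeline

# The model composes a scene it has almost certainly never seen in
# training, from concepts (astronaut, horse, moon) it has seen separately.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut_horse.png")
```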
What share of the training data do you think it's drawing on when it generates CSAM, though? Like, if 1% of the images were cups, it's probably drawing on some of that 1% to generate images of cups.
And yes, you could technically do this with no CSAM in the training material, but we don't know whether that's what the AI is actually doing, because the images used to train it were mass-scraped from the internet. They're using massive amounts of data without fully filtering it, and they can't say with certainty whether or not there is CSAM in the training set.
I didn't know that, my bad.
Fair, but depressing; it seems like it barely registered in the news cycle.