this post was submitted on 20 Jun 2023
33 points (97.1% liked)


cross-posted from: https://vlemmy.net/post/153082

Disclaimer: No images are used in the article.

[–] LufyCZ 2 points 1 year ago (1 children)

Your comment is based on ignorance of the technology. To have an AI spit out images of a specific type, you first have to feed it images of that type.

[–] Protegee9850 7 points 1 year ago* (last edited 1 year ago)

Again, you're obviously ignorant of how this stuff actually works. That is simply not the case. Otherwise the training set would need to contain images of every type you hoped to generate, which is impossible and obviously isn't what happens; a quick look at some of the crazier things people have generated disproves it. Training the model on nude and clothed images of adults, plus clothed images of children, as others have pointed out, would allow it to generate nude images of children. Could a model have been fine-tuned with CSAM? Yes, but that's certainly not a given, and probably not necessary.

The Stable Diffusion sub has largely migrated over to the fediverse. You can find more information there about how this stuff actually works, beyond your introductory understanding of the concept.