this post was submitted on 21 May 2024
509 points (95.4% liked)

[–] [email protected] 17 points 7 months ago (1 children)

You make the assumption that the person generating the images also trained the AI model. You also make assumptions about how the AI was trained without knowing anything about the model.

[–] RGB3x3 -2 points 7 months ago* (last edited 7 months ago) (2 children)

Are there any guarantees that harmful images weren't used in these AI models? Based on how image generation works now, it's very likely that harmful images were used to train the model.

And if a person is using a model based on harmful training data, they should be held responsible.

However, the AI owner/trainer bears even more responsibility for perpetuating harm to children and should be prosecuted accordingly.

[–] [email protected] 12 points 7 months ago (3 children)

And if a person is using a model based on harmful training data, they should be held responsible.

I will have to disagree with you for several reasons.

  • You are still making assumptions about a system you know absolutely nothing about.
  • By your logic, if a product is born from something that caused suffering to others (in this example, an AI trained on CSAM), then the users of that product should be held responsible for the crime committed to create it.
    • Does that apply to every product/result created from human suffering or just the things you don't like?
    • Will you apply that logic to the prosperity of Western nations built on the suffering of indigenous and enslaved people? Should everyone who benefits from Western prosperity be held responsible for the crimes committed against those people?
    • What about medicine? Two examples are the Tuskegee Syphilis Study and the cancer cells of Henrietta Lacks. Medicine benefited greatly from both, but crimes were committed against the people involved. Should every patient in a cancer program that benefited from Ms. Lacks' cells also be required to compensate her family? The doctors who used her cells without permission never did.
    • Should we also talk about the advances in medicine found by Nazis who experimented on Jews and others during WW2? We used that data in our manned space program, paving the way for all the benefits we get from space technology.
[–] PotatoKat -1 points 7 months ago

The difference between the things you're listing and CSAM is that those other things have actual utility outside of getting off. Were our phones made with human suffering? Probably, but phones have many more uses than making someone cum. Are all those things wrong? Yeah, but at least some good came out of them beyond giving people sexual gratification directly from the harm of others.

[–] aesthelete 1 points 7 months ago* (last edited 7 months ago)

Are there any guarantees that harmful images weren’t used in these AI models?

Lol, highly doubt it. These AI assholes pretend that all the training data randomly fell into the model (off the back of a truck) and that they cannot possibly be held responsible for that or know anything about it because they were too busy innovating.

There's no guarantee that most regular porn sites don't contain CSAM or other exploitative imagery and video (of sex trafficking victims). There's absolutely zero chance that there's any kind of guarantee.