this post was submitted on 21 May 2024
Technology


cross-posted from: https://lemmy.zip/post/15863526

Steven Anderegg allegedly used the Stable Diffusion AI model to generate photos; if convicted, he could face up to 70 years in prison

[–] [email protected] 29 points 1 month ago (17 children)

How are they abuse images if no abuse took place to create them?

[–] PoliticalAgitator 4 points 1 month ago* (last edited 1 month ago) (2 children)

Because they are images of children being graphically raped, a form of abuse. Is an AI generated picture of a tree not a picture of a tree?

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (3 children)

No, it isn't, any more than a drawing of a car is a real car, or drawings of money are real money.

[–] PoliticalAgitator 4 points 1 month ago (1 children)

Material showing a child being sexually abused is child sexual abuse material.

[–] [email protected] 0 points 1 month ago (1 children)

And an AI-generated image does not show a child being abused.

[–] laughterlaughter 1 points 1 month ago (10 children)

Nobody is saying they're real, and I now see what you're saying.

By your answers, your question is more "at-face-value" than people assume:

You are asking:

"Did violence occur in real life in order to produce this violent picture?"

The answer is, of course, no.

But people are interpreting it as:

"This is a picture of a man being stoned to death. Is this picture violent, if no violence took place in real life?"

To which the answer is, yes.

[–] [email protected] 0 points 1 month ago (1 children)

Oops, you forgot to use logic. As per the comment you're replying to, the more apt analogy would be: is an AI-generated picture of a car still a picture of a car?

[–] [email protected] 1 points 1 month ago

That has nothing to do with logic? It's pointing out that both drawings and AI generations are not really the things they might depict.

[–] Leg 2 points 1 month ago (4 children)

It's a picture of a hallucination of a tree. Distinguishing real from unreal ought to be taken more seriously given the direction technology is moving.

[–] sxt 4 points 1 month ago (3 children)

If the model was trained on CSAM, then it is dependent on abuse.

[–] Darrell_Winfield 25 points 1 month ago (1 children)

That's a heck of a slippery slope I just fell down.

If responses generated by AI can be held criminally liable for their training data's crimes, we can all be held liable for all text responses from GPT, since it's trained on Reddit data and likely has access to multiple instances of brigading, swatting, manhunts, etc.

[–] laughterlaughter 2 points 1 month ago

You just summarized the ongoing ethical concerns experts and common folk alike have been talking about in the past few years.

[–] [email protected] 19 points 1 month ago

As I said in my other comment, the model does not have to be trained on CSAM to create images like this.

[–] Jimmyeatsausage 1 points 1 month ago (1 children)

That's irrelevant; any realistic depiction of children engaged in sexual activity meets the legal definition of CSAM. Even using filters on images of consenting adults could qualify as CSAM if the intent was to make the actors appear underage.

[–] laughterlaughter 2 points 1 month ago (1 children)

I mean... regardless of your moral point of view, you should be able to answer that yourself. Here's an analogy: suppose I draw a picture of a man murdering a dog. It's an animal abuse image, even though no actual animal abuse took place.

[–] [email protected] 2 points 1 month ago (1 children)

It's not, though; it's just a drawing.

[–] laughterlaughter 2 points 1 month ago (1 children)

Except that it is an animal abuse image, drawing, painting, fiddle, whatever you want to call it. It's still the depiction of animal abuse.

Same with child abuse, rape, torture, killing or beating.

Now, I know what you mean by your question. You're trying to establish that the image/drawing/painting/scribble is harmless because no actual living being suffered. But that doesn't mean that they don't depict it.

Again, I'm seeing this from a very practical point of view. However you see these images through the lens of your own morals or points of view, that's a totally different thing.

[–] [email protected] 2 points 1 month ago (1 children)

And when characters are killed on screen in movies, are those snuff films?

[–] laughterlaughter 2 points 1 month ago

No, they're violent films.

Snuff is a different thing, because it's supposed to be real. Snuff films depict violence in a very real sense, so they're violent. Fiction films also depict violence, and so they're violent too. It's just that they're not about real violence.

I guess what you're really trying to say is that "Generated abuse images are not real abuse images." I agree with that.

But at face value, "Generated abuse images are not abuse images" is incorrect.
