World News
A community for discussing events around the World
Rules:
-
Rule 1: posts have the following requirements:
- Post news articles only
- Video links are NOT articles and will be removed.
- Title must match the article headline
- Not United States Internal News
- Recent (Past 30 Days)
- Screenshots/links to other social media sites (Twitter/X/Facebook/Youtube/reddit, etc.) are explicitly forbidden, as are link shorteners.
-
Rule 2: Do not copy the entire article into your post. Summarizing the key points in 1-2 paragraphs is allowed (even encouraged!), but large segments of articles posted in the body will result in the post being removed. If you have to stop and think "Is this fair use?", it probably isn't. Archive links, especially the ones created on link submission, are absolutely allowed, but those that avoid paywalls are not.
-
Rule 3: Opinion articles, or articles based on misinformation/propaganda, may be removed. Sources that have a Low or Very Low factual reporting rating or MBFC Credibility Rating may be removed.
-
Rule 4: Posts or comments that are homophobic, transphobic, racist, sexist, anti-religious, or ableist will be removed. “Ironic” prejudice is just prejudiced.
-
Posts and comments must abide by the lemmy.world terms of service UPDATED AS OF 10/19
-
Rule 5: Keep it civil. It's OK to say the subject of an article is behaving like a (pejorative, pejorative). It's NOT OK to say another USER is (pejorative). Strong language is fine, just not directed at other members. Engage in good faith and with respect! Accusing another user of being a bot or paid actor also counts as uncivil. Trolling is uncivil and is grounds for removal and/or a community ban.
Similarly, if you see posts along these lines, do not engage. Report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.
-
Rule 6: Memes, spam, other low-effort posting, reposts, misinformation, advocating violence, off-topic content, trolling, offensive content, or content about the moderators or meta may be removed at any time.
-
Rule 7: We didn't USE to need a rule about how many posts one could make in a day, then someone posted NINETEEN articles in a single day. Not comments, FULL ARTICLES. If you're posting more than say, 10 or so, consider going outside and touching grass. We reserve the right to limit over-posting so a single user does not dominate the front page.
We ask that users report any comment or post that violates the rules, and that they use critical thinking when reading, posting, or commenting. Users who post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports, or violate the code of conduct will be banned.
All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.
Lemmy World Partners
News
Politics
World Politics
Recommendations
For Firefox users, there is a media bias / propaganda / fact-check plugin:
https://addons.mozilla.org/en-US/firefox/addon/media-bias-fact-check/
- Consider including the article’s mediabiasfactcheck.com/ link
As a UK citizen, I'm ashamed of my government.
I am firmly against child abusers, but AI images don't harm anyone and are a safe and harmless way for pedophiles to fulfil their urges, which they cannot control.
Where does the training data come from to create indecent images of children?
It doesn't need CSAM data for training; it just needs to know what a boob looks like and what a child looks like. I run some SDXL-based models at home, and I've observed it can be difficult to avoid this more often than you'd think. There are keywords in porn that blur the lines across datasets ("teen", "petite", "young", "small", etc.). The word "girl" in particular: I've found that adding it to basically any porn prompt gives you a small chance of inadvertently creating something undesirable. You have to be really careful and use words like "woman", "adult", etc. instead to convince your image model not to make things that look like children. If you've ever wondered why internet-based porn generators are on super heavy guardrails, this is why.
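To give a rough idea of what those guardrails look like, here's a toy sketch of the most naive possible keyword filter. It's entirely hypothetical (the term lists and the `guard_prompt` function are made up for illustration); real hosted generators layer much more sophisticated, often model-based, filtering on top of this kind of thing.

```python
# Hypothetical sketch of a naive prompt guardrail: swap ambiguous age terms
# for explicitly adult ones and refuse prompts that still look risky.
# Real generators use far more sophisticated, often model-based, filters.
from typing import Optional

AMBIGUOUS_TERMS = {"girl": "woman", "boy": "man", "teen": "adult", "young": "adult"}
BLOCKED_TERMS = {"child", "kid", "minor"}

def guard_prompt(prompt: str) -> Optional[str]:
    """Return a sanitised prompt, or None if it should be rejected outright."""
    words = prompt.lower().split()
    if any(word in BLOCKED_TERMS for word in words):
        return None  # refuse to generate at all
    # Replace ambiguous terms with unambiguous adult ones.
    return " ".join(AMBIGUOUS_TERMS.get(word, word) for word in words)

if __name__ == "__main__":
    print(guard_prompt("young girl on a beach"))  # -> "adult woman on a beach"
    print(guard_prompt("a child in a park"))      # -> None (rejected)
```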
Thanks for the reply, it's given me a good idea of what's most likely happening :)
It's a shame that the rest of the thread went to shit, but unfortunately it's an emotional topic and brings out emotional responses.
Always happy to try and productively add to someone's learning.
From a few months ago
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
I'm not going to say that csam in training sets isn't a problem. However, even if you remove it, the model remains largely the same, and its capabilities remain functionally identical.
The whole point of diffusion models is that you can generate new concepts by combining training data. A model trained on any NSFW images can combine those concepts with any of its non-NSFW concepts. Of course, that's not to say there isn't CSAM in any training data, because there objectively has been in the past, but there doesn't need to be any for a model to generate it.
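If it helps make the composition point concrete, here's a minimal sketch using the Hugging Face diffusers library (assuming it and PyTorch are installed and a CUDA GPU is available). The prompt combines concepts that almost certainly never co-occur in a single training image, and the model still renders them by composing what it learned separately.

```python
# Minimal sketch of concept composition with an SDXL model via diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

# "Astronaut" and "riding a horse on the moon" don't need to exist together in
# any training example; the model composes the concepts it learned separately.
image = pipe(prompt="an astronaut riding a horse on the moon, oil painting").images[0]
image.save("composed_concepts.png")
```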
Thanks for the reply, that makes a lot of sense :)
Thanks for not being a dick! I aim to inform
AI is able to fill in the last field in a table like "old / young" vs. "clothed / naked" when given three of the four fields.
Csam is in the training data. From a few months ago
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
Please reiterate your statement but instead using the "goose chase meme" format.
From a few months ago
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
I don't know what the right answer is, but we provide substitutes for drug addicts to help them overcome their addictions. Methadone and nicotine patches come to mind.
Is it completely inconceivable that a similar tool would help with harmful sexual desires?
I was listening to a podcast on moral philosophy (wouldn't you wanna be as cool as me??), and one suggestion that's stuck with me was the morality of, trigger warning,
spoiler
'life-like child sex robots'. As in, would we as a society want to permit such things, knowing that they could potentially save humans from actual harm if they offer an outlet that scratches an itch? On the other hand, would they bring forth more harmful desires in a greater number of potential perpetrators, leading to even more harm?
Anyway, I'm glad it's not my job to contemplate such disturbing topics.
Permitting is a difficult thing. There is always someone willing to make a profit from such things. Yay, capitalism!
I have no idea where they come from, but I've dived deep enough into the darker web to know people possess such items. I have no doubt that anyone willing to dive in pursuit of such an item would have little problem finding one.
I recall reading an article about the creators of the RealDoll and how they received such requests but refused them. They also claimed they had requests for animals, which they didn't think were serious and also refused.
I'm sure those exist as well and I would rather those exist than for people to harm real animals.
The underlying behavior is the problem, though. While substitutions could potentially be made available, this isn't the same as drug addiction. The reality is that while a pedo could be satiated with a drop-in replacement for a time (and possibly indefinitely), there is a very real risk that after a while they're not satisfied with pretending and could jump to the real thing in a split second. The due course, in my mind, is either modifying the depraved behavior or removing the person from society. While drug addiction can be a vice that doesn't inflict harm on the rest of society (i.e., an addict is potentially able to silo their use from the rest of society), pedophilia is always a crime with a victim. The entire point is ensuring that no one becomes a victim of sex crimes, especially minors, and it is too great a risk to allow in any form.
Let's compare it with adult pornography. Does the consumption of adult pornography remove the desire to have sex with another adult in the long term? Or does it reinforce the sexually desirable characteristics of adults?
Well considering porn addiction can often lead to lower libido and decreased performance with a partner, sorta yeah.
Current mental health approaches for pedophiles include acceptance of their desires as normal, just not something to act on IRL.
It does not prohibit any fictional materials including children, nor can it make someone uninterested in children.
By stripping away safe outlets, we may come at risk of these people increasingly turning to real CSAM, which is way more harmful.
What do you know of the current methods? Where did this information come from? I'd really like to see it. You spoke with such knowledge, you must have the data to back it up, right?
The approach was originally pioneered by the Prevention Project Dunkelfeld and later adopted for wider use in Germany, elsewhere in Europe, and abroad.
Studies have shown that this approach does work, which led to its widespread adoption and popularization.
You can read details of the treatments coming out of this research here.
Beware of corporate greed and keep good old Sci-Hub ready if you want to read the sources in full text.
I am not aware of the research in this area, although I have a minor psych background, so that's interesting and makes sense in hindsight. My understanding is that a large part of the compulsion is driven by guilt, shame, feelings of worthlessness, their own prior victimization, etc. Essentially, it's trying to gain a sense of power by taking it from those more vulnerable than them, like an abuser beating their spouse because someone at work put them down. So it makes sense to encourage a sense of power and lessen any sense of guilt and shame.
On a side note, I can't imagine having their name plastered everywhere does anything but trigger the compulsion to re-offend. Maybe when we advance more as a society, we can separate individuals into categories of has-offended and child-attracted, with the former being on a public danger list and the latter having frequent discreet visits by social workers, mandatory counseling, etc., to lessen the chance of offense and possibly start helping them before they get to the offense stage (those that were ever going to offend).
I'm pretty sure that this is not true. I'd love to see sources.
There was some research before the ongoing AI panic, focusing on hentai instead, since it is as "harmless" as the AI-generated content.
And I do recall that at the time there were voices in research making the point that consumption of the material did not correlate with actually reducing the urges. So this seems highly unlikely.
Upon a proper search, I agree I must have been too rushed in my conclusions, as the influence of computer-generated or drawn CSAM on escalation in offending still seems to be a matter of speculation, with a severe lack of sources on both sides (correct me if I'm wrong).
Both sides draw from singular testimonials.
Still, I will remove the claim about science. Thank you for issuing the correction.
P.S. This paper does a decent job of evaluating both sides, although it has its own strong bias that isn't based on the presented evidence. Still, it is useful for getting a basic overview of the current state of affairs.
It brings me to ask: could lolicon be their next target?