Not a good look for Mastodon - what can be done to automate the removal of CSAM?

[–] [email protected] 18 points 1 year ago (3 children)

I'm not actually going to read all that, but I'm going to take a few guesses that I'm quite sure will be correct.

First, I don't think Mastodon has a "massive child abuse material" problem at all. I think it has, at best, a "racy Japanese-style cartoon drawing" problem or, at worst, an "AI-generated smut meant to look underage" problem. I'm also quite sure there are monsters operating in the shadows, dog-whistling and hashtagging to each other to find like-minded people and set up private exchanges (or instances) for actual CSAM. This is no different from any other platform on the Internet, Mastodon or not. This is no different from the golden age of IRC. This is no different from Tor. This is no different from the Usenet and BBS days. People use computers for nefarious shit.

All that having been said, I'm equally sure this "research" claims that some algorithm has found "actual child porn" on Mastodon, verified by some "trusted third part(y|ies)" that may or may not be named. I'm also sure this "research" spends an inordinate amount of time pointing out the "shortcomings" of Mastodon (i.e., no built-in "features" that would let corporations and governments conduct what is essentially dragnet surveillance on traffic) and how this has to change "for the safety of the children."

How right was I?

[–] SheeEttin 16 points 1 year ago

Halfway there. The PDF lists drawn 2D/3D, AI/ML-generated 2D, and real-life CSAM. It does highlight the actual problem of young platforms with immature moderation tools being unable to deal with a sudden influx of objectionable content.

[–] [email protected] 14 points 1 year ago (1 children)

The content in question is unfortunately something that has become very common in recent months: CSAM (child sexual abuse material), generally AI-generated.

AI is now apparently generating entire children, abusing them, and uploading video of it.

Or, they are counting "CSAM-like" images as CSAM.

[–] [email protected] 11 points 1 year ago

Of course they're counting "CSAM-like" in the stats; otherwise they wouldn't have any stats at all. In any case, they don't really care about child abuse. They care that a platform exists that they haven't been able to wrap their slimy tentacles around yet.