this post was submitted on 24 Jul 2023
211 points (81.7% liked)

Technology


Not a good look for Mastodon - what can be done to automate the removal of CSAM?

[–] [email protected] 35 points 1 year ago (3 children)

This is one of the reasons I'm hesitant to start my own instance - the moderation load expands exponentially as you scale, and without some sort of automated tool to keep CSAM content from being posted in the first place, I can only see the problem increasing. I'm curious to see if anyone knows of lemmy or mastodon moderation tools that could help here.

That being said, it's worth noting that the same Stanford research team reviewed Twitter and found the same dynamic in play, so this isn't a problem unique to Mastodon. The ugly thing is that Twitter has (or had) a team to deal with this, and yet:

“The investigation discovered problems with Twitter's CSAM detection mechanisms and we reported this issue to NCMEC in April, but the problem continued,” says the team. “Having no remaining Trust and Safety contacts at Twitter, we approached a third-party intermediary to arrange a briefing. Twitter was informed of the problem, and the issue appears to have been resolved as of May 20.”

Research such as this is about to become far harder—or at any rate far more expensive—following Elon Musk's decision to start charging $42,000 per month for Twitter's previously free API. The Stanford Internet Observatory, indeed, has recently been forced to stop using the enterprise tier of the tool; the free version is said to provide read-only access, and there are concerns that researchers will be forced to delete data that was previously collected under agreement.

Going forward, such comparisons will be impossible because Twitter has locked down its API. So yes, the Fediverse has a problem, but it's the same one Twitter has, and Twitter is actively ignoring it while reducing transparency into future moderation.

[–] [email protected] 16 points 1 year ago (1 children)

If you run your instance behind Cloudflare, you can enable their CSAM scanning tool, which can automatically block and report known CSAM to the authorities if it's uploaded to your server. This should reduce your risk as the instance operator.

https://developers.cloudflare.com/cache/reference/csam-scanning/
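For context, tools like Cloudflare's work by hashing uploaded media and matching it against databases of known CSAM hashes (real scanners use fuzzy perceptual hashes such as PhotoDNA, which Cloudflare partners on). Here's a minimal sketch of the general hash-matching idea; it uses plain SHA-256 purely as a stand-in so the example is self-contained, and the hash list entry is a placeholder (the empty file's hash), not real data:

```python
import hashlib

# Placeholder hash list. A real deployment would receive vetted hash lists
# from organizations like NCMEC, and would use perceptual (fuzzy) hashes
# rather than exact cryptographic ones. This entry is the SHA-256 of an
# empty file, used only so the example is runnable.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_upload(data: bytes) -> bool:
    """Return True if the upload matches a known-bad hash.

    On a match, a real system would block the upload and file a report
    with the relevant authority rather than just returning a flag.
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_BAD_HASHES

# The empty upload matches the placeholder hash; ordinary content does not.
assert scan_upload(b"") is True
assert scan_upload(b"ordinary cat picture") is False
```

The key point is that instance operators never need to see or store the offending material themselves; matching happens against externally maintained hash lists, which is what makes this automatable at the edge.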

[–] [email protected] 6 points 1 year ago (1 children)

Sweet - thanks - that's a brilliant tool. Bookmarked.

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (1 children)

I think the common-sense solution is creating instances for physically local communities (thus keeping the moderation overhead to a minimum) and being very judicious about which instances you federate with.

That being said, it's only a matter of time before moderation tools are created to streamline the process.

[–] [email protected] 8 points 1 year ago

My instance is for members of a certain group; you have to email the owner a picture of your card to get in. More instances should exist like that. General instances are great, but it's nice knowing all the people on my local feed are in this group too.

[–] [email protected] 2 points 1 year ago (1 children)

@Arotrios @corb3t

They want to intimidate you with #ForTheChildren

Sounds like they succeeded

[–] [email protected] 2 points 1 year ago (1 children)

Nah, not intimidated. More that I ran a sizeable forum in the past and I know what a pain in the ass this kind of content can be to deal with. That's why I was asking about automated tools to deal with it. The forum I ran got targeted by a bunch of Turkish hackers, and one of their attack techniques involved a wave of spambot accounts trying to post crap content. I wasn't intimidated (I fought them for about two years straight), but by the end of it I was exhausted to the point where it just wasn't worth it anymore. An automated CSAM filter would have made a huge difference, but this was over a decade ago and those tools weren't around.

[–] [email protected] 1 points 1 year ago (1 children)

@Arotrios @corb3t

Totally reasonable. If (when) I create my own instance it will be very locked down re who I allow to join

[–] corb3t 1 points 1 year ago

Not sure why you’re continually @ replying to me? Is discussion around activitypub content moderation an issue for you?