this post was submitted on 25 Dec 2023
57 points (58.4% liked)

[–] [email protected] 6 points 10 months ago* (last edited 10 months ago)

It's also a matter of scale. FB has roughly 3 billion users and it's all centralized, so they're able to police it. Their Trust and Safety team is large (which has its own problems, since much of that work is outsourced - but that's another story). The fedi has somewhere around 11M users (according to fedidb.org).
The federated model doesn't really "remove" anything; it just segregates the network into "moderated, good instances" and "others".

I don't think most fedi admins actually follow the law by reporting CSAM to the police (because that kind of thing requires a lot of resources); they just remove it from their servers and defederate. The bottom line is that the protocols and tools built to combat CSAM don't work well in the context of federated networks - we need new tools and new reporting protocols.
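For context on why the existing tooling assumes centralization: the industry-standard approach is matching uploads against shared hash lists of known material (e.g. via PhotoDNA-style programs), and small federated instances generally can't get access to those lists or run that pipeline. A rough sketch of the idea only (the hash list here is made up, and a real system uses perceptual hashes that survive re-encoding, not a cryptographic hash like MD5, which only catches byte-exact copies):

```python
import hashlib

# Hypothetical shared blocklist of known-bad media hashes. In reality this
# data is held by clearinghouses (NCMEC, IWF) and isn't publicly available -
# which is exactly the access problem small fedi instances run into.
KNOWN_BAD_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder entry for this demo
}

def media_hash(data: bytes) -> str:
    # MD5 stands in for a perceptual hash here. An exact-match hash misses
    # any resized or re-encoded copy, one reason naive per-instance
    # filtering falls short of the centralized tooling.
    return hashlib.md5(data).hexdigest()

def should_block(data: bytes) -> bool:
    # Check an incoming upload (or federated attachment) against the list.
    return media_hash(data) in KNOWN_BAD_HASHES
```

Even with list access, each instance would have to run this check on every piece of federated media it caches, and the reporting step (to police or a clearinghouse) is a separate legal workflow the protocol doesn't help with.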

The Stanford Internet Observatory report on fedi CSAM gives a pretty good picture of the current situation, and it's fairly fresh:
https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media