this post was submitted on 23 Jun 2023
64 points (100.0% liked)

Fediverse

A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, KBin, etc.).

 

I was recently thinking about how amazing it is that with this decentralized community we would face no censorship from big corporations, and then I asked myself: what about illegal content? The kind of content that really should not be shared? For example, what if someone creates a Lemmy instance and starts building a community around CP? Or human trafficking? How do we deal with it? I know that instances can choose which other instances they federate with, so if most popular instances blacklist that "illegal" instance, its content wouldn't be easily visible, but it would still be in the Fediverse. Also, would all popular instances have to be quick to blacklist these "illegal" instances? Isn't that a little too difficult? If we go the other way, where they create a whitelist, wouldn't that harm small legitimate instances? Is there a plan to fight off illegal content?

you are viewing a single comment's thread
[–] fubo 18 points 1 year ago* (last edited 1 year ago)

As usual, it's the same as email. Various sorts of spam filtering will need to be developed to keep the platform usable. In the meantime, if you see it, report it and delete it.

Suppose you open up your email and you see that you've received a piece of spam that contains CSAM (CP). You have not committed a crime — but you also mustn't keep it. So you report the spam to your email provider, and you delete it from your mailbox. If you're very diligent maybe you report it to NCMEC.

Suppose you run an email server. You're aware of the existence of spam (alas!) and you do your best to block spam using various technologies ranging from DNSBLs to ML classifiers. If someone on the Internet sends spam containing CSAM to a user on your server, you didn't send it; they did. The sender committed a crime. Your spam filters just didn't catch that particular message. So when your user reports it to you, you improve your spam filters. And you delete it.
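To make the DNSBL idea concrete, here's a minimal sketch of how such a lookup works: reverse the sender IP's octets and query them as a hostname under the blocklist's DNS zone; a successful resolution means the IP is listed. The zone name `dnsbl.example.org` is a placeholder, not a real blocklist.

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "dnsbl.example.org") -> str:
    """Build the DNSBL lookup name per the common convention:
    reverse the IPv4 octets and append the blocklist zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str = "dnsbl.example.org") -> bool:
    """The IP is listed if the constructed name resolves to an
    address; NXDOMAIN means it is not listed."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

So a check on 192.0.2.1 queries `1.2.0.192.dnsbl.example.org`; a mail server would run this (against a real blocklist provider) before accepting a connection.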

Suppose you run an email server. Your spam filters might include a reputation score for other email servers. When your filters notice that a large fraction of the messages from a particular server are spam, they switch to automatically block all mail from that server. Then even if that server tries to send spam to your users, the offending messages never even hit your server's disk.
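A reputation score like the one described could be sketched as follows: count total and spam messages per sending server, and block a server once its spam fraction crosses a threshold. The threshold and minimum sample size here are illustrative, not tuned values from any real filter.

```python
from collections import defaultdict

class ServerReputation:
    """Track per-server spam fractions and auto-block bad senders."""

    def __init__(self, threshold: float = 0.8, min_messages: int = 20):
        self.threshold = threshold        # spam fraction that triggers a block
        self.min_messages = min_messages  # don't judge on too little data
        self.total = defaultdict(int)
        self.spam = defaultdict(int)

    def record(self, server: str, was_spam: bool) -> None:
        """Log one message from `server` and whether it was flagged."""
        self.total[server] += 1
        if was_spam:
            self.spam[server] += 1

    def is_blocked(self, server: str) -> bool:
        """Block once a large fraction of a server's mail is spam."""
        n = self.total[server]
        if n < self.min_messages:
            return False  # not enough history to judge yet
        return self.spam[server] / n >= self.threshold
```

With a blocked server, the connection can be refused outright, which is why the offending messages never even touch your disk.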

Expect that as this platform matures, it will need many of the same sorts of spam-mitigation technology that email and other federated services have used.


I'm repeating "and you delete it" once again because that's important. You mustn't retain copies of illegal files, even as training data for your spam classifiers. The big email providers and social media companies go to a lot of effort to keep data about CSAM files, typically in the form of hashes, without having to keep the actual files.
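In the simplest form, that "data about the files" is a set of fingerprints: you hash known-bad content once, delete the files, and compare incoming files against the stored hashes. This sketch uses a plain SHA-256, which only catches exact byte-for-byte copies; real systems like Microsoft's PhotoDNA use perceptual hashes so that resized or re-encoded images still match. The sample payload is obviously just a stand-in.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Cryptographic fingerprint of a file's bytes."""
    return hashlib.sha256(data).hexdigest()

# The hash set is all that gets retained; the original files are deleted.
known_bad = {fingerprint(b"example-banned-payload")}

def should_reject(data: bytes) -> bool:
    """Check an incoming file against the known-bad fingerprint set."""
    return fingerprint(data) in known_bad
```

The key property is that the hash set is useless for reconstructing the original content, so the operator can filter for it without possessing it.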