Lemmy.World Announcements

This post was submitted on 28 Aug 2023 · 1751 points (97.9% liked)


Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do to stop them: since we changed our registration policy, the attackers simply post from other instances.

We keep working on a solution; we have a few things in the works, but they won't help us right now.

Thank you for your understanding, and apologies to our users, moderators, and the admins of other instances who had to deal with this.

Edit: @[email protected], the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators, and if it hadn't been his community, it would have been another one. It is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what's next very soon.

Edit 2: removed the bit about the moderator tools. It came out a bit harsher than we meant it. It's been a long day, and having to deal with this kind of stuff got some of us a bit salty, to say the least. Remember, we also had to deal with people posting scat not too long ago, so this isn't the first time we've felt helpless. Anyway, I hope we can announce something more positive soon.

[–] dragontamer 149 points 1 year ago (44 children)

Not that I'm familiar with Rust at all, but... perhaps we need to talk about this.

“The only thing that could have prevented this is better moderation tools. And while a lot of the instance admins have been asking for this, it doesn’t seem to be on the developers’ roadmap for the time being. There are just two full-time developers on this project, and they seem to have other priorities. No offense to them, but it doesn’t inspire much faith for the future of Lemmy.”

Let's be productive. What exactly are the moderation features needed, and what would be easiest to implement in the Lemmy source code? Are you talking about a mass ban of users from specific instances? A ban on new accounts from certain instances? Like, what moderation tool exactly is needed here?

[–] Agamemnon 116 points 1 year ago (8 children)

Speculating:

Restricting posting from accounts that don't meet some adjustable criteria, like account age, comment count, prior moderation actions, or average comment length (maybe not an upvote quota, because not all instances use votes); see the sketch after this comment.

Automatic hash comparison of uploaded images against a database of known illegal content.
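To make the first idea concrete, here is a minimal sketch of what such an adjustable gate might look like, in Rust since that is the language Lemmy is written in. Every name here (PostingCriteria, may_post, the threshold values) is invented for illustration; this is not Lemmy's actual code or data model.

```rust
use std::time::{Duration, SystemTime};

/// Instance-adjustable thresholds; fields are hypothetical, not Lemmy's schema.
struct PostingCriteria {
    min_account_age: Duration,
    min_comment_count: u32,
    max_prior_mod_actions: u32,
}

struct Account {
    created_at: SystemTime,
    comment_count: u32,
    prior_mod_actions: u32,
}

impl PostingCriteria {
    /// An account may post only if it clears every configured threshold.
    fn may_post(&self, account: &Account, now: SystemTime) -> bool {
        let age = now
            .duration_since(account.created_at)
            .unwrap_or(Duration::ZERO);
        age >= self.min_account_age
            && account.comment_count >= self.min_comment_count
            && account.prior_mod_actions <= self.max_prior_mod_actions
    }
}

fn main() {
    let criteria = PostingCriteria {
        min_account_age: Duration::from_secs(7 * 24 * 60 * 60), // one week
        min_comment_count: 10,
        max_prior_mod_actions: 0,
    };
    let brand_new = Account {
        created_at: SystemTime::now(),
        comment_count: 0,
        prior_mod_actions: 0,
    };
    // A brand-new account fails the age and comment-count checks.
    println!("may post: {}", criteria.may_post(&brand_new, SystemTime::now()));
}
```

The appeal of this design is that all three checks are cheap, server-side lookups, so an instance could tune the thresholds per community without any new infrastructure.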

[–] [email protected] 11 points 1 year ago (5 children)

What are the chances of a hash collision in a case like this? I know accidental hash collisions are usually super rare, but with enough people it'd probably still happen every now and then, especially if the system is designed to detect images similar to the original illegal image (to catch any minor edits).

Is there a way to use multiple hashes from different sources to help reduce collisions? For example, checking both the MD5 and SHA-256 hashes instead of just one or the other, and only flagging when both match within a certain degree.

[–] TsarVul 26 points 1 year ago (1 children)

Traditional hashes like MD5 and SHA-256 are not locality-sensitive, so they can't be used to detect matches "within a certain degree". Otherwise, yes, you are correct: perceptual hashes can create false positives. Very unlikely, but possible. This is not a problem with a perfect solution; extraordinary edge cases must be resolved on a case-by-case basis.

And yes, the simplest solutions must always be implemented first: tracking post reputation, a captcha before posting, waiting for accounts to mature before they can post, etc. The problem is that right now the only defense we have access to is mods. Mods are people, usually with eyeballs. Eyeballs which will be poisoned by CSAM so the rest of us can post memes and funnies without issue. This is not fair to them. We must do all we can, and if all we can includes perceptual hashing, we have a moral obligation to do so.
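To make the locality-sensitivity distinction concrete, below is a toy average hash (aHash) in Rust, hand-rolled over an already-decoded 8x8 grayscale image so it needs no external crates. It is purely illustrative: production systems use much stronger perceptual hashes (e.g. PhotoDNA or PDQ) matched against curated databases, and real code would decode images with a library rather than take a raw pixel array.

```rust
/// Toy average hash (aHash): one bit per pixel of an 8x8 grayscale
/// image, set if that pixel is brighter than the image's mean.
fn average_hash(pixels: &[u8; 64]) -> u64 {
    let mean = (pixels.iter().map(|&p| p as u32).sum::<u32>() / 64) as u8;
    pixels
        .iter()
        .enumerate()
        .fold(0u64, |h, (i, &p)| if p > mean { h | (1 << i) } else { h })
}

/// Hamming distance: how many of the 64 bits differ.
fn distance(a: u64, b: u64) -> u32 {
    (a ^ b).count_ones()
}

fn main() {
    // A smooth gradient "image", a one-pixel edit of it, and its inverse.
    let original: [u8; 64] = core::array::from_fn(|i| (i * 4) as u8);
    let mut edited = original;
    edited[31] = 200; // the "minor edit": brighten a single pixel
    let inverted: [u8; 64] = core::array::from_fn(|i| (252 - i * 4) as u8);

    let h = average_hash(&original);
    // The minor edit lands 1 bit away; the unrelated image 64 bits away.
    println!("edited:   {}", distance(h, average_hash(&edited)));
    println!("inverted: {}", distance(h, average_hash(&inverted)));
    // A threshold like `distance <= 5` therefore catches near-duplicates.
    // With MD5/SHA-256, a single changed pixel produces a completely
    // unrelated digest, so no such "close match" test is possible.
}
```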

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

Something I thought of that might be helpful is giving mods the ability to add a post delay on a per-community basis. The delay would be moderator-adjustable, but only moderators and admins would be able to see a post for its first X minutes. It'd help in situations like ongoing attacks, where you don't necessarily want to manually approve every post but do want a chance to catch any garbage before it goes public.

Edit: and yeah, one of the reasons I'm aware that perceptual hashes can have collisions is that a number of image viewers and cataloguing tools, like XnView MP or Hydrus Network, use them to identify duplicate images. However, I've seen collisions between unrelated images when lowering the sensitivity, which is why I was wondering whether there's a way to use multiple hashing algorithms to reduce false positives without sacrificing usefulness; a sketch follows below.
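Continuing the toy sketch above (and reusing its average_hash and distance helpers), the "flag only if two different algorithms agree" idea might look like the following; difference_hash and both thresholds are invented for illustration.

```rust
/// Toy difference hash (dHash): one bit per horizontal neighbour pair,
/// set if the left pixel is darker than the right one. 56 bits used.
fn difference_hash(pixels: &[u8; 64]) -> u64 {
    let mut h = 0u64;
    for row in 0..8 {
        for col in 0..7 {
            let i = row * 8 + col;
            if pixels[i] < pixels[i + 1] {
                h |= 1 << (row * 7 + col);
            }
        }
    }
    h
}

/// Flag an upload only when BOTH hash families agree it is near a known
/// image. Thresholds are invented; real systems tune them on test data.
fn is_flagged(upload: &[u8; 64], known: &[u8; 64]) -> bool {
    distance(average_hash(upload), average_hash(known)) <= 5
        && distance(difference_hash(upload), difference_hash(known)) <= 5
}
```

Because the two hashes measure different properties of the image (overall brightness pattern versus local horizontal gradients), an accidental near-collision in one is unlikely to repeat in the other, which is exactly the false-positive reduction being asked about here.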

[–] [email protected] 5 points 1 year ago

Or just make posts approval-only with a mod queue.
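The delay idea and the mod-queue idea combine naturally into a single visibility rule. A minimal sketch, with all types and names invented rather than taken from Lemmy's actual schema:

```rust
use std::time::{Duration, SystemTime};

#[allow(dead_code)]
enum Review {
    Pending,
    Approved,
    Removed,
}

struct Post {
    created_at: SystemTime,
    review: Review,
}

/// `delay` is per-community and moderator-adjustable; `None` disables it.
fn publicly_visible(post: &Post, delay: Option<Duration>, now: SystemTime) -> bool {
    match post.review {
        Review::Removed => false,
        // A moderator approval short-circuits the waiting period.
        Review::Approved => true,
        Review::Pending => match delay {
            None => true, // community has no delay configured
            Some(d) => now
                .duration_since(post.created_at)
                .map(|age| age >= d)
                .unwrap_or(false),
        },
    }
}

fn main() {
    let post = Post {
        created_at: SystemTime::now(),
        review: Review::Pending,
    };
    // Freshly posted and unreviewed: hidden from the public for 10 minutes.
    let visible = publicly_visible(&post, Some(Duration::from_secs(600)), SystemTime::now());
    println!("visible to the public: {visible}");
}
```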
