this post was submitted on 21 Oct 2024
742 points (98.8% liked)

Technology
[–] brucethemoose 4 points 17 hours ago* (last edited 17 hours ago) (1 children)

Don't they flag stuff automatically?

Not sure what they're using on the backend, but open-source LLMs that take image inputs are good now. They can read garbled text from a meme and interpret it in context, easily. And this is apparently a field that's been refined over years, in part due to the legal need for CSAM detection.

[–] T156 2 points 15 hours ago

They do, but they'd still need someone to go through the flagged content and check it. Reddit gets away with it, much as Facebook groups do, by offloading moderation onto users; the admins only get roped in for ostensibly big things like ban evasion and site-wide bans, or, lately, when moderators don't toe the company line exactly.

I doubt that they would use an LLM for that. It's very expensive and slow, especially at the volume of images they would need to process. Existing CSAM detectors aren't as expensive and are much faster: they compute a perceptual hash of each image and compare it against a database of hashes of known CSAM.
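The hash-and-compare approach can be sketched with a simple "average hash": downscale the image to a tiny grayscale grid, set one bit per pixel depending on whether it is above the mean brightness, and flag a match when the Hamming distance to any known hash is small. This is a minimal illustration, not the actual algorithm production detectors like PhotoDNA use (that one is proprietary); the 8x8 grid size, the distance threshold, and the function names here are all illustrative assumptions.

```python
# Toy perceptual-hash matcher: illustrative only, not any real CSAM detector.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int,
    one bit per pixel: 1 if the pixel is at or above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(image_hash, known_hashes, max_distance=5):
    """Flag the image if its hash is within max_distance bits of any known hash.
    A small tolerance lets the match survive minor edits like re-encoding."""
    return any(hamming(image_hash, h) <= max_distance for h in known_hashes)

# Two nearly identical 8x8 "images" hash to values one bit apart.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
near = [row[:] for row in img]
near[3][7] += 4  # small perturbation, pushes one pixel just past the mean
known = {average_hash(img)}
print(matches_known(average_hash(near), known))  # True
```

The key property is that the comparison is a cheap bitwise operation per known hash, so screening an image against millions of entries is trivial next to running an LLM over it.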