this post was submitted on 24 Jul 2023
211 points (81.7% liked)


Not a good look for Mastodon - what can be done to automate the removal of CSAM?

Reddit_was_fun 0 points 1 year ago (last edited 1 year ago)

The article points out that the strength of the Fediverse is also its downside: federated moderation makes it challenging to deal with CSAM consistently.

We have seen it even here with the challenges around Lemmynsfw. In fact, they have taken the stance that CSAM-like images, with of-age models made to look underage, are fine as long as there is some dodgy ‘age verification’.

The idea is that abusive instances would get defederated, but I think we are going to find that inadequate to keep up without some sort of centralized reporting escalation and AI-based auto-screening.
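
For illustration only, here is a minimal sketch of what hash-based auto-screening could look like, assuming an instance had access to a vetted list of known-bad perceptual hashes. The hash value, threshold, and function name below are hypothetical placeholders, and real deployments rely on dedicated matching services such as PhotoDNA rather than a general-purpose library:

```python
# Minimal sketch of hash-based screening of uploaded images (illustrative only).
# The hash list and threshold below are hypothetical placeholders.
from PIL import Image
import imagehash

# Hypothetical: in practice these hashes would come from a trusted clearinghouse.
KNOWN_BAD_HASHES = {imagehash.hex_to_hash("fedcba9876543210")}
MATCH_THRESHOLD = 5  # maximum Hamming distance still treated as a match (tunable)

def flag_for_review(path: str) -> bool:
    """Return True if the image's perceptual hash is near any known-bad hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - bad <= MATCH_THRESHOLD for bad in KNOWN_BAD_HASHES)

if __name__ == "__main__":
    # Hypothetical usage: screen a freshly uploaded file before federating it.
    print(flag_for_review("upload.jpg"))
```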

[email protected] 15 points 1 year ago

The problem with screening by AI is that there are going to be false positives, and it's going to be extremely challenging and frustrating to fight them. Last month I got an automated letter for a speeding infraction: it was generated by a camera, the plate was read by OCR, and the letter I received (from "Seat Pleasant, Maryland," lol) was supposedly signed off by a human police officer, but the image was so blurry that the plate was practically unreadable. Which is exactly what happened: the OCR got one of the letters wrong, and I got a speeding ticket from a town I've never been to and had never even heard of before that letter.

The letter was full of helpful ways to pay for and dispense with the ticket, but to challenge it I had to do so in writing; there was no email address anywhere in the letter. I had to go to their website and sift through dozens of pages to find one that had any chance of being able to do something about it, and I made a couple of false steps along the way. THEN, after calling them up and explaining the situation, they apologized and said they'd dismiss the charge--which they failed to do: I got another letter about it just TODAY saying a late fee had now been tacked on.

And this was mere OCR, which has been in use for multiple decades and is fairly stable now. This pleasant process is coming to anything involving AI as a judging mechanism.
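
To put rough numbers on that worry (purely illustrative, none of these figures come from the thread): even a screening model with a very low error rate produces a flood of false flags at scale, because almost everything it looks at is innocent.

```python
# Back-of-the-envelope base-rate arithmetic; every number is an assumption.
daily_uploads = 1_000_000     # hypothetical uploads per day across federated instances
false_positive_rate = 0.001   # hypothetical: 0.1% of innocent images get flagged
prevalence = 0.00001          # hypothetical: 1 in 100,000 uploads is actually abusive

innocent = daily_uploads * (1 - prevalence)
false_flags = innocent * false_positive_rate    # innocent uploads flagged per day
true_flags = daily_uploads * prevalence         # abusive uploads (assuming perfect recall)

print(f"false flags per day: {false_flags:,.0f}")   # ~1,000
print(f"true flags per day:  {true_flags:,.0f}")    # ~10
# Under these assumptions roughly 99% of flags land on innocent users,
# which is exactly the appeals burden described above.
```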

mothringer 7 points 1 year ago (last edited 1 year ago)

THEN, after calling them up and explaining the situation, they apologized and said they'd dismiss the charge--which they failed to do

That sounds about right. When I was in college I got a speeding ticket halfway between the college town and the city my parents lived in. I couldn't afford the fine, being a poor college student, so I called the court and asked if an extension was possible. They told me absolutely, how long do you need? So I started saving up. Shortly before I had enough, I got a call from my mom saying she had received a letter: there was a bench warrant for my arrest over the fine.
