this post was submitted on 20 Jun 2023
80 points (98.8% liked)

Asklemmy


I've been watching the recent surge in fediverse users for about a week now. Last week it was climbing at what I would call a natural, organic pace. For the last couple of days, though, it's been something like 350k new accounts in two days.

I love to see the growth in users, but these have to be bot-created accounts. I don't want this to become a bot-infested community. I see the value in bots when used correctly, but let's be real: bots mixed in with the general population could ruin this community.

Is there anything planned to deal with this? Is some third party working to throw off the "stableness" of Lemmy / the fediverse?

[โ€“] [email protected] 0 points 2 years ago (1 children)

What kind of tooling do you envision to find bot users?

[โ€“] [email protected] 1 points 2 years ago

I'm not sure. At a user level, perhaps some sort of tracking of logins, posting frequency, that sort of stuff. If a user signs up and immediately starts making hundreds of posts, something is probably up and an admin should be made aware somehow. If a dormant account wakes up and starts posting a lot, maybe an admin should take a casual look. Also, as much as people seem to hate it, track some IP addresses, at least temporarily. If 100+ accounts all sign up from one IP in the space of an hour, they are probably less than legitimate.
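The two heuristics above could be sketched out in a few lines. This is just an illustration, not anything Lemmy actually ships: the record shapes and the thresholds (100 posts in the first day, 100 signups per IP per hour) are assumptions an admin would tune.

```python
from datetime import timedelta

# Hypothetical thresholds -- tune for a real instance.
POST_BURST_LIMIT = 100        # posts within the first day of an account's life
SIGNUPS_PER_IP_LIMIT = 100    # signups from one IP inside one hour

def flag_new_account_bursts(accounts):
    """accounts: list of dicts with 'name', 'created', 'post_times'.
    Flags accounts that post heavily right after signing up."""
    flagged = []
    for acct in accounts:
        early = [t for t in acct["post_times"]
                 if t - acct["created"] <= timedelta(days=1)]
        if len(early) >= POST_BURST_LIMIT:
            flagged.append(acct["name"])
    return flagged

def flag_signup_ip_clusters(signups):
    """signups: list of (ip, signup_time) pairs. Flags IPs with too
    many registrations inside any one-hour window."""
    by_ip = {}
    for ip, t in signups:
        by_ip.setdefault(ip, []).append(t)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        for i, start in enumerate(times):
            window = [t for t in times[i:] if t - start <= timedelta(hours=1)]
            if len(window) >= SIGNUPS_PER_IP_LIMIT:
                flagged.add(ip)
                break
    return flagged
```

Either function only produces names/IPs for an admin to look at; nothing here takes action on its own.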

Assuming the problem is posts and comments by bots, there could be something that looks for known spam copypasta, previously moderated/admin'd content, or keywords; that alone could be enough on a small instance. Going further, perhaps something that reads the posts from users of your instance, classifies them based on previous admin actions (plus some manual work to flag things as "known good"), and trains some sort of classifier (Bayes/Markov/ML/whatever). Such tools already exist and are in wide use for email spam filtering and the like. They aren't perfect, but they would make an OK first line of defense that raises things to the attention of the admin.
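A naive Bayes classifier of the kind used in classic email spam filters is small enough to sketch with nothing but the standard library. The training labels here are hypothetical ("spam" = previously removed content, "ham" = posts an admin marked known-good), and real deployments would use a maintained library rather than this toy:

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Tiny naive Bayes text classifier, trained from prior admin
    actions: 'spam' for removed content, 'ham' for known-good posts."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.docs = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.counts[label].update(tokens(text))
        self.docs[label] += 1

    def spam_probability(self, text):
        scores = {}
        vocab = set(self.counts["spam"]) | set(self.counts["ham"])
        for label in ("spam", "ham"):
            total = sum(self.counts[label].values())
            # Log prior plus Laplace-smoothed log likelihoods.
            score = math.log(self.docs[label] / sum(self.docs.values()))
            for tok in tokens(text):
                score += math.log(
                    (self.counts[label][tok] + 1) / (total + len(vocab) + 1))
            scores[label] = score
        # Normalize the two log scores into a probability of spam.
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])
```

An instance admin would feed `train()` from the modlog and surface anything with a high `spam_probability()` for human review, matching the "first line of defense" framing above.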

I am sure you could go further down the automation path, but I would imagine all of these as "human in the loop" sort of things: once a user/post/whatever gets flagged, it generates some sort of report for an admin to look at. I don't know how much of this stuff AutoModerator or mod bots did on Reddit, but a decent amount of it is probably transferable however it was done.

Perhaps some/all of this doesn't get put into Lemmy itself but interacts through admin APIs and/or the database. I would start with just basic things in Lemmy itself, as at the moment there is hardly any admin interface to Lemmy at all. If I just want a list of the users on my instance, I have to query the database. Make deleting/purging users easier (I have heard from some admins having bot trouble that it was easier to ban accounts than to delete them). Properly split out the modlog per community, show all the details of each action, and show whether something was a mod or admin action.
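To make the "query the database by hand" point concrete: Lemmy stores its data in PostgreSQL, so the sketch below uses an in-memory SQLite stand-in purely to illustrate the kind of query an admin ends up writing. The `person` table and `local` column are modeled on Lemmy's schema, but treat the exact names as assumptions.

```python
import sqlite3

# SQLite stand-in for Lemmy's PostgreSQL database; table and column
# names are assumptions modeled on Lemmy's schema, not guaranteed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, local INTEGER, banned INTEGER)")
conn.executemany("INSERT INTO person VALUES (?, ?, ?)", [
    ("alice", 1, 0),          # local account
    ("bob", 1, 0),            # local account
    ("remote_user", 0, 0),    # federated account from another instance
])

# "Just a list of the users on my instance" -- local accounts only.
local_users = [row[0] for row in conn.execute(
    "SELECT name FROM person WHERE local = 1 ORDER BY name")]
print(local_users)  # ['alice', 'bob']
```

That this requires raw SQL at all is the comment's point: even trivial admin tasks currently bypass Lemmy's own interface.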