this post was submitted on 23 Jun 2023
2472 points (95.9% liked)


Please. Captcha by default. Email domain filters. Auto-block federation from servers that don't respect these safeguards. By default. Urgent.

meme not so funny

And yes, to refute some comments: this post is being upvoted by bots. A single computer was all it took, not "thousands of dollars" spent.

[–] [email protected] 63 points 1 year ago* (last edited 1 year ago) (5 children)

The admin https://lemmy.dbzer0.com/u/db0 of the lemmy.dbzer0.com instance has built a possible solution that uses a chain-of-trust system: instances whitelist each other and build up larger whitelists to contain the spam/bot problem, instead of constantly blacklisting. Admins and mods may want to take a look at the blog post explaining it in more detail: https://dbzer0.com/blog/overseer-a-fediverse-chain-of-trust/

[–] Ech 25 points 1 year ago (2 children)

So, defeating the point of Lemmy? Nah, that's a terrible "solution" that will only serve to empower big servers imposing on smaller or even personal ones.

[–] prlang 16 points 1 year ago (1 children)

It's probably the opposite. I'd say that right now, the incentive for a larger server with an actual active user base is to move to a whitelist-only model, given the insane number of small servers with no activity but incredibly high account registrations happening right now. When the people controlling all of those bot accounts start flexing their muscle and flooding the fediverse with spam, it'll become clear that new and unproven servers have to be cut off. This post just straight up proves that. It's the most upvoted Lemmy post I've ever seen.

If I'm right and the flood of spam cometh, then a chain of trust is literally the only way a smaller instance will ever get to integrate with the wider ecosystem. Reaching out to someone and having to register in order to be included isn't too much of an ask for me. Hell, most instances require an email for a user account, and some even do questionnaires.

[–] Ech 5 points 1 year ago (1 children)

When those "someones" are reasonable, sure, it won't be bad, but when they're not? Give the power of federation to a few instances, and that's not just a possibility but an inevitability.

We already know Meta is planning to add themselves to the Fediverse. Set down this path and the someone deciding who gets access and how will end up being Zuck, or someone like him. That sound like a good future to you?

[–] prlang 3 points 1 year ago

Sorry for the late response, I fell asleep.

Yeah, I'm concerned about that too. It really doesn't matter what anyone does if a group the size of Meta joins the fediverse, though. They have tens of thousands of engineers working for them and billions of users; they can do whatever the hell they want, and it'll completely swamp anyone else's efforts.

Zuck wanting to embrace, extend, and extinguish the ActivityPub protocol is a separate issue, though. The way a chain of trust works, when you grant trust to a third party, they can then extend trust to anyone they want. So, for instance, if the root authority "A" grants trust to a second party "B", then "B" can grant trust to "C", "D", and "E". If "A" has a problem with the users of "E", the only recourse they have is to talk to "B" and try to get them to remove "E", or ban "B" through "E" altogether. I think we can both agree that the latter action is super drastic; it mirrors what Beehaw did, and it will piss a lot of people off.
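To make that concrete, here's a minimal sketch of that grant-and-revoke structure (hypothetical Python, not Overseer's actual implementation):

```python
# Toy chain of trust: any trusted instance can vouch for new ones,
# and revoking an instance drops everything vouched for beneath it.
# Purely illustrative; not Overseer's real code or data model.

class TrustChain:
    def __init__(self, root: str):
        self.parent = {root: None}  # instance -> who vouched for it

    def grant(self, granter: str, grantee: str) -> None:
        if granter not in self.parent:
            raise ValueError(f"{granter} is not trusted, cannot vouch")
        self.parent[grantee] = granter

    def revoke(self, instance: str) -> None:
        """The drastic option: cut off an instance and its whole subtree."""
        doomed = {instance}
        grew = True
        while grew:  # keep sweeping until no more orphans turn up
            grew = False
            for node, voucher in list(self.parent.items()):
                if voucher in doomed and node not in doomed:
                    doomed.add(node)
                    grew = True
        for node in doomed:
            self.parent.pop(node, None)

    def trusts(self, instance: str) -> bool:
        return instance in self.parent


chain = TrustChain("A")              # "A" is the root authority
chain.grant("A", "B")
for third_party in ("C", "D", "E"):
    chain.grant("B", third_party)    # "B" extends trust on its own

chain.revoke("B")                    # banning "B" through "E" altogether
print(chain.trusts("E"))             # False: E was only trusted via B
```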

So if you run that experiment, where any particular group can become a "root" authority for the network, I'd speculate that the most moderate administrators will likely end up being the most widely used over time. It's kinda playing out like that at a small scale right now with the Beehaw/Lemmy.world split. Lemmy.world is becoming the larger instance; Beehaw's still there, just smaller and more heavily moderated.

People can pick the whitelists they want to subscribe to. Who gets to participate in a network really just comes down to the values of the people running and participating in it. A chain of trust is just a way to scale people's values in a formal way.

[–] [email protected] 4 points 1 year ago

The (simplified) way it works: it reads data from the public observer's API and flags an instance as "suspicious" when total users divided by (totalPosts + totalComments) exceeds a threshold. That threshold, the "susScore", is configurable if you want to run your own instance of it.
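In rough Python, the heuristic presumably boils down to something like this (the field names and default threshold are my guesses, not the project's actual code):

```python
SUS_SCORE = 20  # configurable: max tolerated users per unit of activity

def is_suspicious(total_users: int, total_posts: int, total_comments: int,
                  sus_score: float = SUS_SCORE) -> bool:
    """Flag an instance whose registered users dwarf its actual activity."""
    activity = total_posts + total_comments
    # max(activity, 1) avoids dividing by zero on a brand-new instance
    return total_users / max(activity, 1) > sus_score

# e.g. 30,000 registered users but only 12 posts and 3 comments ever
print(is_suspicious(30_000, 12, 3))  # True
```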

[–] [email protected] 22 points 1 year ago (1 children)

db0 probably knows what they're talking about, but the idea of an "Overseer Control Plane" managed by one single person sounds like a recipe for disaster.

[–] [email protected] 8 points 1 year ago* (last edited 1 year ago) (1 children)

I hear you. For what it's worth, it's mentioned at the end of the blog post: the project is open source, people can run their own Overseer API and create less strict or more strict whitelists, and instances can be registered to multiple chains. Don't mistake my enthusiasm for self-run open social media platforms for trying to promote a single tool as the be-all and end-all solution. Under the Swiss cheese security model, this could be another tool in the toolbox to curb the annoyance to a point where spam or bots become less effective.

[–] prlang 11 points 1 year ago (1 children)

Couldn't agree more. I gotta say though, I kinda find it funny that the pirate server is coming up with practical solutions for dealing with spam in the fediverse. I guess it shouldn't surprise me, though; y'all have been dealing with this distributed trust thing for a while now, eh?

[–] [email protected] 2 points 1 year ago

When you're a swashbuckling pirate on the lawless seven seas, you gotta come up with clever ways to enforce your ship's code of conduct.

[–] [email protected] 15 points 1 year ago (1 children)

Obviously biased, but I'm really concerned this will lead to it becoming infeasible to self-host with working federation and result in further centralization of the network.

Mastodon has a ton more users, and I'm not aware of it having had to resort to IRC-style federation whitelists.

I'm wondering if this is just another instance of kbin/lemmy moderation tools being insufficient for the task and if that needs to be fixed before considering breaking federation for small/individual instances.

[–] [email protected] 6 points 1 year ago (2 children)

He explained it already. It looks at the ratio of users to posts. If your "small" instance has 5000 users and 2 posts, it will probably assume a lot of those users are spam bots. If your instance has 2 users and 3 posts, it will assume your users are real. It's a ratio, and the admin of each server that utilizes it can control the level at which it assumes a server is overrun by spam accounts.
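Plugging those two examples into the ratio (the cutoff value here is made up for illustration):

```python
def users_per_post(users: int, posts: int) -> float:
    return users / max(posts, 1)

cutoff = 100  # the admin-chosen level; purely illustrative

print(users_per_post(5000, 2) > cutoff)  # True  -> likely a bot farm
print(users_per_post(2, 3) > cutoff)     # False -> looks like real people
```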

[–] [email protected] 2 points 1 year ago

Okay, so how do you bootstrap a new server in that system?

What do you do when you just created a server and can't get new users because you aren't whitelisted yet?

But what if you do have a handful of users to start out, or it's just yourself? How do you become 'active' without being able to federate with any other servers? Talk to yourself?

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

The issue is that it could still be abused against small instances.

For example, I had a bit less than 10 bots trying to sign up to my instance today (I have registration with approval on), and those accounts are reported as instance users even though I refused their registrations. Because of this, my comment/post-per-user ratio took a big hit, with me unable to do anything about it (other than deleting those accounts directly from the db).

So even if you don't let spam accounts into your instance, you can easily get blacklisted from that list, because creating a few tens of thousands of account registration requests isn't that hard, even against an instance protected by captcha.
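As a toy illustration of that attack (numbers invented), given that rejected signups still count toward the instance's user total:

```python
def users_per_activity(users: int, posts_and_comments: int) -> float:
    return users / max(posts_and_comments, 1)

sus_score = 20                              # illustrative threshold
real_users, activity = 3, 50                # a healthy one-person instance

print(users_per_activity(real_users, activity))  # 0.06 -> fine

bot_signups = 20_000                        # refused, yet still counted
inflated = users_per_activity(real_users + bot_signups, activity)
print(inflated > sus_score)                 # True -> instance gets blacklisted
```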

[–] eekrano 2 points 1 year ago

The comment/post ratio is useless for this as well, though.

  1. Create a server
  2. Create 10,000 bot accounts
  3. Have 85% of the bot accounts create a random post
  4. Have 40% of them post a comment on a top-level post

Looks like a pretty busy, totally real server by the aforementioned metric.
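Rough arithmetic on that recipe (the susScore of 20 is a made-up threshold):

```python
users = 10_000
posts = int(users * 0.85)      # step 3: 8,500 random posts
comments = int(users * 0.40)   # step 4, read as 40% of the accounts: 4,000 comments

ratio = users / (posts + comments)
print(round(ratio, 2))         # 0.8 users per unit of activity
print(ratio > 20)              # False -> sails right under a susScore of 20
```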

[–] ulu_mulu 6 points 1 year ago (1 children)

Who controls the Overseer Control Plane?

[–] prlang 9 points 1 year ago

It's been answered further down. Yeah, it's that one bloke who did it: https://lemmy.dbzer0.com/u/db0. The project's also open source, though, so anyone can run their own Overseer Control server with their own chain-of-trust whitelist. I suspect many whitelists will pop up as the fediverse evolves.