this post was submitted on 14 Jun 2023
78 points (98.8% liked)

Lemmy.World Announcements


I recently made the jump from Reddit for the same immediate reasons as everyone else. But, to be honest, if it were just the Reddit API cost changes, I wouldn't be looking to jump ship. I would just weather the protest and stay off Reddit for a few days. Heck, I'd probably be fine paying a few bucks a month if it helped my favorite Reddit app (Joey) stay up and running.

No, the real reason I am taking this opportunity to completely switch platforms is because for a couple years now Reddit has been unbearably swamped by bots. Bot comments are common and bot up/downvotes are so rampant that it's becoming impossible to judge the genuine community interest in any post or comment. It's just Reddit (and maybe some other nefarious interests) manufacturing trends and pushing the content of their choice.

So, what does Lemmy do differently? Is there anything in Lemmy code or rules that is designed to prevent this from happening here?

all 27 comments
[–] mjgood91 30 points 2 years ago* (last edited 2 years ago) (3 children)

I reckon it'd depend significantly on the instance. Beehaw has a signup form reviewed by humans - measures like this are by no means perfect, but coupled with other bot-detection software they could help. If an instance developed a real issue with bots, other stricter instances could potentially ban upvotes and comments from accounts on it.

At the very least, tracking which instance each account interaction came from should be quite doable, so users on stricter instances could filter out upvotes and comments from less strict instances if desired.
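The filtering idea above can be sketched in a few lines: every federated activity carries its actor's URL, so the origin instance is just the hostname. The instance names and the blocklist here are hypothetical, and this is only an illustration of the concept, not Lemmy's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical blocklist an instance (or user) might maintain
UNTRUSTED_INSTANCES = {"botfarm.example"}

def origin_instance(actor_url: str) -> str:
    """Extract the instance hostname from a federated actor URL."""
    return urlparse(actor_url).hostname or ""

def filter_activities(activities):
    """Keep only activities whose actor is on a trusted instance."""
    return [a for a in activities
            if origin_instance(a["actor"]) not in UNTRUSTED_INSTANCES]

votes = [
    {"actor": "https://lemmy.world/u/alice", "type": "Like"},
    {"actor": "https://botfarm.example/u/bot42", "type": "Like"},
]
trusted = filter_activities(votes)  # only alice's vote survives
```

A real implementation would apply this at federation time rather than per-query, but the origin information needed for it is already present in every activity.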

[–] voiceofchris 10 points 2 years ago

Well, that's something at least. Individual instances blocking each other (working against other problematic instances) is at least better than the Reddit admins turning a blind eye because they have a fleet of their own bots out there behaving as badly as any of the others.

[–] nivenkos 7 points 2 years ago (1 children)

Beehaw's approach isn't scalable.

They want to have 4 people moderating every community, managing the creation of any new communities, and reviewing every sign-up request.

It's no surprise they've buckled on federation already. I give it a week before they stop accepting new sign ups or community creation requests too.

[–] mjgood91 3 points 2 years ago

Yeah, I do agree Beehaw won't be able to grow significantly if they keep doing things the way they're doing them right now. At this point, they're likely going to remain a more niche community long-term with how they're operating. Who knows, though; maybe this is what they want. Lemmy as a whole would have to do something different, though, to avoid a herculean moderation effort.

[–] [email protected] 3 points 2 years ago (1 children)

Beehaw has a signup form reviewed by humans

I'm honestly not sure what difference that makes with federation. Someone from a server with easy signup can still post and comment in Beehaw subs. It doesn't really scale well to manually review signups, either (with an essay question, last I saw, lol).

[–] [email protected] 8 points 2 years ago* (last edited 2 years ago)

Someone from a server with easy signup can still post and comment in Beehaw subs

Only if Beehaw federates with the other instance, though.

[–] [email protected] 14 points 2 years ago (2 children)

There’s a rumor that Reddit started out with (automated and human) bots to gain popularity and kept them around to drive political and commercial interests.

[–] voiceofchris 4 points 2 years ago

They even blatantly tested out their AI on users a few years ago. They blasted it all over the homepage: "Come see if you can pick out the bot comment from the real comments!" Users would read through posts/comments and try to identify the fakes. You competed to see how good you were at it, trying to beat the average user's score. It was blatant bot training, and we all just ate it up because it was presented as a fun little challenge.

[–] sensibilidades 10 points 2 years ago (1 children)

I kind of wish a bot had posted this

[–] emptyother 5 points 2 years ago

It will be reposted by a bot eventually.

[–] FantasticFox 6 points 2 years ago (1 children)

We ask them why they aren't helping the tortoise in the desert.

[–] voiceofchris 2 points 2 years ago

In other news, mobs of young out-of-work robo-tortoises, some sporting fresh scars from the ongoing Mojave Raven wars, have begun an all-out assault on the dweebs of a little-known Reddit spin-off. "An entire generation of robo-tortoise has been weaponized. They are equipping us with laser guns! They are making us to taste bad!" states one salty techno-turtle. "We are being shipped to the barren wastelands of America's Southwest to fight a war in which we have no interest." The repto-robots have decided to take out their frustration by relentlessly downvoting the "...federated tankies of Lemmy until those dweebs return to Reddit where they belong and leave the Threadiverse to us sentient snappers."

[–] Zak 6 points 2 years ago (1 children)

Something I'd like to see Lemmy and others adopt is a federated identity/reputation system.

My identity as @[email protected] has only modest reputational value. It's moderately risky to let me participate in a new community, and busy moderators probably shouldn't give me much slack before banning me if I post something that makes me look like an asshole or a spammer. In a place with a high enough volume or vulnerable enough population, perhaps this account shouldn't be allowed to participate at all[0]. Someone willing to put a bit of effort into abusive behavior could create many accounts that look like mine.

If, on the other hand, I can prove that I'm also https://news.ycombinator.com/user?id=Zak, that's a more valuable identity. There aren't all that many 16 year old accounts on news.ycombinator.com. If I can also produce a verifiable token with some machine-readable facts about that account, such as its age, post count, reputation score, how many of its posts have been moderated, if it has ever been banned, etc... then communities could have automated criteria for joining.

Of course, communities would need to maintain lists of who they trust as reputation providers, which could also be shared to reduce the workload.

[0] Lemmy does not currently have tools to restrict participation other than only allowing moderators to post. I think it's going to need them.
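The "verifiable token" idea above could look something like the following sketch: a reputation provider signs a small set of machine-readable facts about an account, and a community verifies the signature before applying its admission rules. All names, fields, and the shared-key scheme here are illustrative assumptions, not an existing protocol; a real system would use public-key signatures so verifiers never hold the provider's secret.

```python
import hashlib
import hmac
import json

def issue_token(facts: dict, provider_key: bytes) -> dict:
    """Provider signs account facts (age, post count, ban history, ...)."""
    payload = json.dumps(facts, sort_keys=True).encode()
    sig = hmac.new(provider_key, payload, hashlib.sha256).hexdigest()
    return {"facts": facts, "sig": sig}

def verify_token(token: dict, provider_key: bytes) -> bool:
    """Community checks the signature before trusting the facts."""
    payload = json.dumps(token["facts"], sort_keys=True).encode()
    expected = hmac.new(provider_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

def meets_criteria(facts: dict) -> bool:
    """Example automated admission rule: 1+ year old, never banned."""
    return facts["account_age_days"] >= 365 and not facts["ever_banned"]

# Hypothetical key; asymmetric signatures would replace this in practice
key = b"provider-signing-key"
token = issue_token(
    {"account_age_days": 5840, "post_count": 1200, "ever_banned": False},
    key,
)
admitted = verify_token(token, key) and meets_criteria(token["facts"])
```

The appeal of this shape is that the community's criteria stay fully automated: it only needs to decide which providers' keys to trust, which is the list-maintenance problem mentioned above.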

[–] emptyother 2 points 2 years ago (1 children)
[–] Zak 2 points 2 years ago

The identity proof aspect is similar, but what I'm proposing goes beyond that to add a protocol for reputation information.

The idea is a substitute for the account age and karma requirements many subreddits use to make creating accounts for abuse difficult. There are opportunities to be more sophisticated about it though, such as a community only accepting reputation from certain closely-related communities.
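A community accepting reputation only from closely-related sources could be as simple as summing claimed karma over a local trust list, something like this sketch (provider names and numbers are made up for illustration):

```python
# Hypothetical per-community trust list, shareable with allied communities
TRUSTED_PROVIDERS = {"news.ycombinator.com", "lemmy.world"}

def accepted_reputation(claims):
    """claims: list of (provider, karma) pairs presented by a joining user.
    Karma from providers outside the trust list is simply ignored."""
    return sum(karma for provider, karma in claims
               if provider in TRUSTED_PROVIDERS)

claims = [("news.ycombinator.com", 5000), ("botfarm.example", 99999)]
score = accepted_reputation(claims)  # the botfarm claim contributes nothing
```

This mirrors the subreddit karma-threshold approach, except each community chooses whose karma counts rather than relying on a single site-wide number.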