this post was submitted on 09 Jul 2023
2184 points (97.4% liked)
Fediverse
you are viewing a single comment's thread
view the rest of the comments
Reddit had ways to automatically catch people trying to manipulate votes though, at least the obvious ones. A friend of mine posted a Reddit link in our group for everyone to upvote and got temporarily suspended for vote manipulation about an hour later. I don't know whether something like that can be implemented in the Fediverse, but some people on GitHub have suggested a way for instances to share with other instances how trusted or distrusted a user or instance is.
An automated trust rating will be critical for Lemmy in the longer term. It's the same arms race email has to fight. There should be a linked trust system of both instances and users: the instance 'vouches' for its users' trust scores. However, if other instances collectively disagree, then the instance's own trust score also takes a hit. Other instances can then use this information to judge how much to allow from users of that instance.
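The vouching mechanic could be sketched roughly like this (all names, scores, and the penalty weight are hypothetical, just to illustrate the idea of an instance staking its own trust on its users):

```python
# Hypothetical sketch of a linked trust system: an instance vouches for
# its users, and when other instances observe behaviour that contradicts
# the vouch, the instance's own trust score takes a hit.

class Instance:
    def __init__(self, name, trust=0.5):
        self.name = name
        self.trust = trust  # 0.0 (untrusted) .. 1.0 (fully trusted)
        self.vouched = {}   # user -> score this instance claims

    def vouch(self, user, score):
        self.vouched[user] = score

    def effective_user_trust(self, user):
        # Other instances discount the vouched score by the instance's
        # own trust, so a shady instance can't mint trusted users.
        return self.vouched.get(user, 0.0) * self.trust

def report_disagreement(instance, user, observed, penalty=0.1):
    """Another instance observed behaviour inconsistent with the vouch."""
    claimed = instance.vouched.get(user, 0.0)
    if abs(claimed - observed) > 0.5:
        # The instance vouched for a bad actor: ding the instance itself.
        instance.trust = max(0.0, instance.trust - penalty)

home = Instance("lemmy.example", trust=0.8)
home.vouch("alice", 0.9)
print(round(home.effective_user_trust("alice"), 2))  # 0.72
report_disagreement(home, "alice", observed=0.1)
print(round(home.trust, 2))  # 0.7
```

The key design point is that trust flows both ways: the discount makes a low-trust instance's vouching nearly worthless, and bad vouching erodes the instance's own score.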
LLM bots have made this approach much less effective, though. I can just leave my bots for a few months or a year to build reputation, then automate them so they're completely indistinguishable from 200 natural-looking users, making my opinion carry 200x the weight. Mostly for free. A person with money could do so much more.
It's the same game as email: an arms race between spam detection and spam-detector evasion. The goal isn't to catch all the bots, but to clear out the low-hanging fruit.
In your case, if another server noticed a large number of accounts working in lockstep, that's fairly obvious bot-like behaviour. If their home server also noticed the pattern and reports it (lowering those users' trust ratings), then the home server won't be dinged harshly. If it reports that all is fine, then it's assumed the instance itself might be involved.
If you control the instance, you can make it lie, but that downgrades the instance's score. If it's someone else's, then there's an incentive not to become a bot farm, or at least to be honest in how it reports to the rest.
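The "lockstep" detection above could be as simple as comparing vote histories and flagging account pairs that agree on essentially everything (the vote data, the minimum-overlap cutoff, and the 0.95 threshold are all invented for illustration):

```python
from itertools import combinations

# Hypothetical vote logs: account -> {post_id: +1 or -1}
votes = {
    "bot1": {"p1": 1, "p2": 1, "p3": -1, "p4": 1},
    "bot2": {"p1": 1, "p2": 1, "p3": -1, "p4": 1},
    "user": {"p1": 1, "p2": -1, "p5": 1},
}

def overlap_similarity(a, b):
    """Fraction of shared posts where two accounts voted identically."""
    shared = votes[a].keys() & votes[b].keys()
    if not shared:
        return 0.0
    agree = sum(votes[a][p] == votes[b][p] for p in shared)
    return agree / len(shared)

# Flag pairs that voted on enough of the same posts and (nearly)
# always agreed -- normal users overlap far less consistently.
suspicious = [
    (a, b) for a, b in combinations(votes, 2)
    if len(votes[a].keys() & votes[b].keys()) >= 3
    and overlap_similarity(a, b) > 0.95
]
print(suspicious)  # [('bot1', 'bot2')]
```

Real detection would need to be smarter (bots can add noise votes), but this is the low-hanging-fruit version that catches naive farms.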
This is basically what happens with email. It's FAR from perfect, but a lot better than nothing. I believe over 99% of all emails sent are spam. Almost all get blocked, and the spammers have to work to get theirs through.
This will be very difficult. With Lemmy being open source (which is good), bot makers can just avoid the pitfalls they see in the system (which is bad).
That's such a hilariously bad metric for detecting a bot network, too. It wouldn't even work to detect a real one, so all that policy ever did was annoy real users.
Hearing that, I wonder if they were using an IP-address-based system. That would cause real problems for people using a VPN, but it wouldn't surprise me.
RIP u/unidan
Nope, I tried manipulating votes from Apollo once and got a warning.
Nope, so that's probably it.
I got that message too when switching accounts to vote several times. They can probably see it's all coming from the same IP.
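If it is IP-based, the check could be as simple as counting distinct accounts voting on the same item from one address (a toy sketch with made-up data; a real system would also have to handle shared NATs and VPNs, which is exactly the false-positive problem mentioned above):

```python
from collections import defaultdict

# Hypothetical vote events: (post_id, account, ip)
events = [
    ("p1", "alt1", "203.0.113.7"),
    ("p1", "alt2", "203.0.113.7"),
    ("p1", "alt3", "203.0.113.7"),
    ("p2", "alice", "198.51.100.2"),
]

# Group the accounts that voted on each post, per source address.
by_post_ip = defaultdict(set)
for post, account, ip in events:
    by_post_ip[(post, ip)].add(account)

# Flag (post, ip) pairs where several accounts voted from one address.
flagged = {k: sorted(users) for k, users in by_post_ip.items()
           if len(users) >= 3}
print(flagged)  # {('p1', '203.0.113.7'): ['alt1', 'alt2', 'alt3']}
```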