this post was submitted on 09 Jul 2023
2184 points (97.4% liked)

Fediverse

17744 readers
4 users here now

A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of "federation" and "universe".

Getting started on the Fediverse:

founded 5 years ago
MODERATORS
 

The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I'm sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

top 50 comments
sorted by: hot top controversial new old
[–] [email protected] 358 points 1 year ago* (last edited 1 year ago) (26 children)

This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:

one main, one alt, one "professional" (linked publicly on my website), and five for my bots (whose accounts were optimistically created, but were never properly run). I had all 8 accounts signed in on my third-party app and I could easily manipulate votes on the posts I posted.

I feel like this is what happened when you'd see posts with hundreds / thousands of upvotes but had only 20-ish comments.

There needs to be a better way to solve this, but I'm not sure we truly can. Botnets are a problem across all social media (my undergrad thesis many years ago was detecting botnets on Reddit using Graph Neural Networks).

Fwiw, I have only one Lemmy account.

[–] impulse 168 points 1 year ago (2 children)

I see what you mean, but there's also a large number of lurkers, who will only vote but never comment.

I don't think it's unfeasible to have a small number of comments on a highly upvoted post.

[–] [email protected] 72 points 1 year ago

If it's a meme or shitpost there isn't anything to talk about

[–] [email protected] 33 points 1 year ago (1 children)

Maybe you're right, but it just felt uncanny to see thousands of upvotes on a post with only a handful of comments. Maybe someone who's active in the bot-detection subreddits can pitch in.

[–] RedCowboy 21 points 1 year ago (1 children)

I agree completely. 3k upvotes on the front page with 12 comments just screams vote manipulation

load more comments (1 replies)
[–] simple 42 points 1 year ago (8 children)

Reddit had ways to automatically catch people trying to manipulate votes, though, at least the obvious ones. A friend of mine posted a reddit link for everyone in our group to upvote and got temporarily suspended for vote manipulation about an hour later. I don't know if something like that can be implemented in the Fediverse, but some people on GitHub suggested a way for instances to share with other instances how trusted/distrusted a user or instance is.

[–] cynar 37 points 1 year ago (3 children)

An automated trust rating will be critical for Lemmy in the longer term. It's the same arms race email has to fight. There should be a linked trust system of both instances and users: the instance 'vouches' for its users' trust scores, but if other instances collectively disagree, the instance's own trust score also takes a hit. Other instances can then use this information to judge how much to allow from users on that instance.
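A rough sketch of that linked trust idea in Python (every name, score, and threshold here is invented purely for illustration; nothing like this exists in Lemmy today):

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    trust: float = 0.5                       # 0..1, shaped by peer consensus
    user_trust: dict = field(default_factory=dict)

    def vouch(self, user: str) -> float:
        # The instance vouches for its users, but its own trust score
        # caps how much that vouching is worth to other instances.
        return self.trust * self.user_trust.get(user, 0.5)

def penalize(instance: Instance, disagreeing_peers: int, total_peers: int) -> None:
    # If a majority of peer instances disagree with an instance's
    # vouching, the instance's own trust score takes a hit too.
    if total_peers and disagreeing_peers / total_peers > 0.5:
        instance.trust *= 0.8
```

Receiving instances could then scale down (or drop) votes whose effective vouch falls below some cutoff.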

load more comments (3 replies)
[–] [email protected] 20 points 1 year ago

RIP u/unidan

load more comments (5 replies)
[–] BrianTheeBiscuiteer 23 points 1 year ago (2 children)

Yes, I feel like this is a moot point. If you want it to be "one human, one vote" then you need to use some form of government login (like id.me, which I've never gotten to work). Otherwise people will make alts and inflate/deflate the "real" count. I'm less concerned about "accurate points" and more concerned about stability, participation, and making this platform as inclusive as possible.

[–] [email protected] 20 points 1 year ago* (last edited 1 year ago) (5 children)

In my opinion, the biggest (and quite possibly most dangerous) problem is someone artificially pumping up their ideas. To all the users who sort by active / hot, this would be quite problematic.

I'd love to see some social media research groups actually consider how to detect and potentially eliminate this issue on Lemmy, considering Lemmy is quite new and still malleable (compared to other social media). For example, if they think metric X would be a good candidate to include in all metadata to improve the chances of detection, it may be possible to add it to the source code for posts / comments / activities.

I know a few professors and researchers who do research on social media and associated technologies, I'll go talk to them when they come to their office on Monday.

load more comments (5 replies)
load more comments (1 replies)
[–] [email protected] 21 points 1 year ago* (last edited 1 year ago) (1 children)

I feel like this is what happened when you’d see posts with hundreds / thousands of upvotes but had only 20-ish comments.

Nah it's the same here in Lemmy. It's because the algorithm only accounts for votes and not for user engagement.

load more comments (1 replies)
load more comments (22 replies)
[–] Boozilla 127 points 1 year ago (7 children)

The lack of karma helps some. There's no point in trying to rack up the most points for your account(s), which is a good thing. Why waste time on the lamest internet game when you can engage in conversation with folks on Lemmy instead?

[–] Protoknuckles 176 points 1 year ago (3 children)

It can still be used to artificially pump up an idea. Or used to bury one.

[–] danc4498 53 points 1 year ago (6 children)

This is the problem. All the algorithms are based on the upvote count. Bad actors will abuse this.

load more comments (6 replies)
load more comments (2 replies)
[–] [email protected] 53 points 1 year ago

Maybe to move public perception of a product or a political goal,
to push a narrative of some kind. Astroturfing, basically.

[–] [email protected] 36 points 1 year ago* (last edited 1 year ago) (3 children)

The "lack of karma" is a fallacy. The default Lemmy UI doesn't display it, but the karma system appears to be fully built.

load more comments (3 replies)
[–] [email protected] 35 points 1 year ago (5 children)

Corporations could use it to push their ads to the top

load more comments (5 replies)
[–] reallynotnick 28 points 1 year ago (9 children)

Maybe I'm misunderstanding karma, but Memmy appears to show the total upvotes I've gotten for comments and posts, isn't that basically karma?

load more comments (9 replies)
load more comments (2 replies)
[–] popemichael 90 points 1 year ago (3 children)

You can buy 700 votes anonymously on reddit for really cheap

I don't see that it's a big deal, really. It's the same as it ever was.

[–] [email protected] 56 points 1 year ago (2 children)

Over a hundred dollars for 700 upvotes O_o

I wouldn't exactly call that cheap 🤑

On the other hand, ten or twenty quick downvotes on an early answer could swing things I guess ...

[–] popemichael 44 points 1 year ago (19 children)

For the companies who want a huge advantage over others, $100 is nothing in an advertising budget.

I have a small business and I do $1000 a week in advertising.

[–] OtakuAltair 27 points 1 year ago* (last edited 1 year ago)

Yeah, 700 upvotes soon after a post is made could easily shoot it up to the top of even a popular sub for a few days (especially with the lack of mod tools rn), with others upvoting it purely because it already has a lot of upvotes.

load more comments (18 replies)
load more comments (1 replies)
load more comments (2 replies)
[–] czarrie 82 points 1 year ago (6 children)

The nice thing about the federated universe is that, yes, you can bulk-create user accounts on your own instance - and that server can then be defederated by other servers when it becomes obvious that it's going to create problems.

It's not a perfect fix, and as this post demonstrates, it's only really effective after a problem has been identified. But at least for vote manipulation across servers, a server could act on obvious signals: if it detects that, say, 99% of new upvotes are coming from an instance created yesterday with 1 post, it could flag that for a human to review.
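That heuristic could be sketched like this (the thresholds and instance names are made up for illustration; real detection would need tuning):

```python
def should_flag(votes_by_instance: dict[str, int],
                instance_age_days: dict[str, int],
                instance_posts: dict[str, int],
                share_threshold: float = 0.99,
                min_age_days: int = 7) -> list[str]:
    """Return instances whose vote share on a post looks suspicious:
    nearly all of the votes, from a very young, near-empty instance."""
    total = sum(votes_by_instance.values())
    flagged = []
    for name, count in votes_by_instance.items():
        young = instance_age_days.get(name, 0) < min_age_days
        inactive = instance_posts.get(name, 0) <= 1
        if total and count / total >= share_threshold and young and inactive:
            flagged.append(name)   # hand off to a human admin for review
    return flagged
```

Note it only flags for review rather than auto-invalidating, since small legitimate instances will sometimes trip naive thresholds.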

[–] [email protected] 28 points 1 year ago (2 children)

It actually seems like an interesting problem to solve. Instance admins have the SQL database with all the voting records, so finding manipulative instances seems a bit like a machine learning problem to me.
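Even before any machine learning, a plain SQL query over the vote records surfaces the obvious cases. A toy example with sqlite3 (the table and column names here are invented; the real Lemmy schema is different):

```python
import sqlite3

# Toy stand-in for an instance's vote table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE vote (post_id INT, voter TEXT, instance TEXT);
    INSERT INTO vote VALUES
        (1, 'alice',  'lemmy.world'),
        (1, 'shill0', 'sus.example'),
        (1, 'shill1', 'sus.example'),
        (1, 'shill2', 'sus.example');
""")

# Per-instance vote share on a post: a lopsided share from one
# unfamiliar instance is the kind of pattern an admin could look for.
rows = conn.execute("""
    SELECT instance,
           COUNT(*) AS votes,
           ROUND(100.0 * COUNT(*) /
                 (SELECT COUNT(*) FROM vote WHERE post_id = 1), 1) AS pct
    FROM vote
    WHERE post_id = 1
    GROUP BY instance
    ORDER BY votes DESC
""").fetchall()
```

Here `rows` comes back with `sus.example` holding 75% of the votes on the post, which is the kind of output you could feed into a fancier model later.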

load more comments (2 replies)
load more comments (5 replies)
[–] [email protected] 82 points 1 year ago (3 children)

In case anyone's wondering this is what we instance admins can see in the database. In this case it's an obvious example, but this can be used to detect patterns of vote manipulation.

[–] [email protected] 38 points 1 year ago

“Shill” is a rather on-the-nose choice for a name to iterate with haha

load more comments (2 replies)
[–] sparr 79 points 1 year ago (17 children)

Web of trust is the solution. Show me vote totals that only count people I trust, 90% of people they trust, 81% of people they trust, etc. (0.9 multiplier should be configurable if possible!)
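That decaying web-of-trust scoring could be sketched as follows (user names are invented; the 0.9 decay is configurable, as suggested):

```python
from collections import deque

def trust_weights(me: str, trusts: dict[str, list[str]],
                  decay: float = 0.9) -> dict[str, float]:
    """BFS out from `me`: people I trust directly count fully (1.0),
    people they trust count decay (0.9), then decay**2 (0.81), etc.
    The shortest trust path to each user determines their weight."""
    weights = {me: 1.0}
    frontier, level = [me], 1.0
    while frontier:
        nxt = []
        for user in frontier:
            for friend in trusts.get(user, []):
                if friend not in weights:
                    weights[friend] = level
                    nxt.append(friend)
        frontier, level = nxt, level * decay

    return weights

def weighted_score(voters: list[str], weights: dict[str, float]) -> float:
    # Votes from strangers outside my web of trust count for nothing.
    return sum(weights.get(v, 0.0) for v in voters)
```

Each user would see different totals, which is the point: a bot farm nobody trusts contributes zero to your view of a post's score.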

load more comments (17 replies)
[–] YoBuckStopsHere 61 points 1 year ago (2 children)

Reddit admins manipulated vote counts all the time.

[–] [email protected] 34 points 1 year ago (2 children)

Reddit also created fake users to post fake content... At least in the beginning of reddit.

[–] misterundercoat 27 points 1 year ago (2 children)

TIL "beginning of Reddit" comprises the time up to and including July 2023.

load more comments (2 replies)
load more comments (1 replies)
load more comments (1 replies)
[–] [email protected] 54 points 1 year ago (3 children)

Votes were just a number on reddit too... There was no magic behind them, and as spez showed us multiple times, even reddit modified counts to make some posts tell a different story.

And remember: reddit used to run a horde of bots just to look popular.

Everything on the internet is or can be fake!

load more comments (3 replies)
[–] [email protected] 42 points 1 year ago* (last edited 1 year ago) (7 children)
load more comments (7 replies)
[–] [email protected] 39 points 1 year ago* (last edited 1 year ago) (8 children)

[This comment has been deleted by an automated system]

load more comments (8 replies)
[–] [email protected] 38 points 1 year ago (2 children)

Federated actions are never truly private, including votes. While it's inevitable that some people will abuse the vote viewing function to harass people who downvoted them, public votes are useful to identify bot swarms manipulating discussions.

load more comments (2 replies)
[–] Flashoflight 38 points 1 year ago (1 children)

This is really important to call out. The bots have gotten so good, though, that it would be hard to tell the difference. To be honest, I'm pretty sure reddit was teeming with them and it didn't really bother me. lol

[–] [email protected] 25 points 1 year ago (1 children)

I have strong feelings about reddit being infested with bots too. And if it happened on reddit, there's no reason lemmy wouldn't have the same issue.

it didn’t really bother me

Bot armies could have hidden things from you that would bother you deeply, but because it's hidden, you don't have a chance to be bothered.

load more comments (1 replies)
[–] [email protected] 34 points 1 year ago (2 children)

I think people often forget that federation is not a new thing; it was the original design for internet communication services. Email, which predates the Internet, is also a federated network and the most widely adopted mode of internet communication of them all. It had spam issues too, and there were many proposed solutions.

The one I liked the most was hashcash, since it requires no trust. It was the first proof-of-work system and an inspiration for blockchains.
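The hashcash idea in miniature, sketched in Python (real hashcash uses SHA-1 and a dated stamp header format; this only shows the proof-of-work principle):

```python
import hashlib
from itertools import count

def valid(stamp: str, bits: int = 12) -> bool:
    """A stamp is valid if its SHA-256 hash starts with `bits` zero bits."""
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return digest >> (256 - bits) == 0

def mint(resource: str, bits: int = 12) -> str:
    """Brute-force a nonce until the stamp hashes under the target.
    Costs the sender roughly 2**bits hash attempts."""
    for nonce in count():
        stamp = f"{resource}:{nonce}"
        if valid(stamp, bits):
            return stamp
```

The asymmetry is the point: a server verifies a stamp with one hash, but minting one costs the sender real CPU time, which is cheap for one vote and expensive for a horde of fake ones.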

load more comments (2 replies)
[–] [email protected] 32 points 1 year ago (1 children)

Honestly, thank you for demonstrating a clear limitation of how things currently work. Lemmy (and Kbin) probably should look into internal rate limiting on posts to avoid this.

I'm a bit naive on the subject, but perhaps there's a way to detect "over X votes from over X users from this instance" and basically invalidate them?

[–] [email protected] 21 points 1 year ago (2 children)

How do you differentiate between a small instance where 10 votes would already be suspicious vs a large instance such as lemmy.world, where 10 would be normal?

I don't think instances publish how many users they have and it's not reliable anyway, since you can easily fudge those numbers.

load more comments (2 replies)
[–] [email protected] 31 points 1 year ago (5 children)

This is something that will be hard to solve. You can't really discern between a large instance with a lot of real users and an instance with a lot of fake users made to look real. Any kind of protection I can think of, for example one based on user activity, can simply be faked by the bot server.

The only solution I see is to just publish the vote percentages or counts per instance, since that's what the local server knows, and let us personally ban instances we don't recognize or care about, so their votes won't count in our feed.

load more comments (5 replies)
[–] [email protected] 30 points 1 year ago

PSA: internet votes are based on a biased sample of users of that site and bots

[–] [email protected] 30 points 1 year ago (1 children)

maybe we can show a breakdown of which servers the votes are coming from so anything sus can be found out right away. Like, it would be easy enough to identify a bot farm, I'd think

[–] Apoidea 29 points 1 year ago (1 children)

Yep, give admins the tools they need to identify this activity so they can defederate accordingly. Seems like the only way.

load more comments (1 replies)
[–] [email protected] 23 points 1 year ago

So far, the majority of content that approaches spam I've come across on Lemmy has been posts on [email protected] which highlight an issue attributed to the fediverse, but which ultimately have a corollary issue on centralised platforms.

Obviously there are challenges to address in running any user-content hosting website, and since Lemmy is a community-driven project, it behooves the community to be aware of these challenges and actively resolve them.

But a lot of posts, intentionally or not, verge on the implication that the fediverse uniquely has the problem, which just feeds into the astroturfing of large, centralized media.

[–] [email protected] 22 points 1 year ago (6 children)

Upvotes aren't just a number; along with comments, they determine placement in the ranking algorithm. It's easy to censor an unwanted view by mass-downvoting it.

load more comments (6 replies)
[–] [email protected] 20 points 1 year ago (2 children)
load more comments (2 replies)
[–] [email protected] 19 points 1 year ago

IMO, likes need to be handled with supreme prejudice by the Lemmy software, and a lot of thought needs to go into this. There are many cases where the software could reject a likely-fake like with near zero chance of rejecting valid likes. Putting this policing entirely on instance admins is a recipe for failure.

load more comments
view more: next ›