I have the application process enabled for people joining my instance, and today I got about 20 bots trying to join after five days of nobody applying at all. I can tell they're bots because the applications are generic messages: I included a question asking what 2+3 is, and none of them answered it, they just sent a generic message.

Be careful out there, all you small-instance admins.

[–] [email protected] 14 points 1 year ago (7 children)

Why are these bot operators going through the hassle of joining existing instances... couldn't they just set up their own, since instances would need to manually defederate them after they spam?

I wonder how difficult it would be to take a Formspree-style approach to combat the bots, using a hidden honeypot form field. Something like the sketch below.
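A minimal sketch of that idea in TypeScript, purely hypothetical: the field name and application shape are made up, not Lemmy's actual signup API. The form would include a field hidden from humans via CSS; anything that fills it in is almost certainly a bot.

```typescript
// Hypothetical honeypot check for signup applications.
// "website" is a form field hidden from humans via CSS; real users
// never see it, so a non-empty value is a strong bot signal.
interface SignupApplication {
  username: string;
  answer: string;    // the admin's custom question (e.g. "what is 2+3?")
  website?: string;  // honeypot field, assumed name
}

function looksLikeBot(app: SignupApplication): boolean {
  return typeof app.website === "string" && app.website.trim().length > 0;
}

// Example: a bot that auto-fills every field gets flagged.
console.log(looksLikeBot({ username: "spam01", answer: "hi", website: "http://spam.example" })); // true
console.log(looksLikeBot({ username: "alice", answer: "5" })); // false
```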

[–] [email protected] 16 points 1 year ago (2 children)

Because you can't make thousands of spambots on your own instance: as you noted, it'd take about 5 minutes for everyone to defederate from it and thus remove all the bots.

You want to put a handful on every server you can, because then your bots have to be manually rooted out by individual admins, or the federation between instances gets so broken there's no value in the platform.

As for standing up more instances, you have to bear the cost of running the servers yourself, which isn't prohibitive, but it's still more than running bots through stolen/infected proxies (and shit like Hola, which gives you a "free VPN" at the cost of your computer becoming an exit node they then resell).

Also, I'm suspicious that it's not 'spam bots' in the traditional sense since what's the point of making thousands of bots but then barely using them to spam anyone? My tinfoil hat makes me think this is a little more complicated, though I have zero evidence other than my native paranoia.

[–] [email protected] 7 points 1 year ago (1 children)

> Also, I’m suspicious that it’s not ‘spam bots’ in the traditional sense since what’s the point of making thousands of bots but then barely using them to spam anyone?

This is Twitter and web forum spam 101: you establish a bunch of accounts while there are very few controls, then you start burning them over time, since you get maybe one shot to mass-spam with each of them before it gets banned.

[–] [email protected] 6 points 1 year ago (1 children)

It's always about following the money with spammers, malware authors, and the like: there's (usually) a commercial incentive they're pushing toward.

The bots are evolving, adapting to countermeasures, and becoming "smarter", which means some human somewhere is investing time and effort in this, which in turn means there's some incentive.

That said, I doubt it's strictly commercial, because the Lemmy user base is really small and probably not worth much: if you're here, you're almost certainly not in the part of the bell curve that falls for the usual spambot monetization plays (double-your-money schemes, fake reviews, affiliate links, astroturfing).

I'd wager it's more about the ability to be disruptive than the ability to extract money from the users you can target, so like, your average 16-year-old internet trolls.

[–] [email protected] 1 points 1 year ago (1 children)

What are the typical actors in the Reddit and Twitter spam scene? And what's the likelihood of each type setting up here now?

  • Product spamming, to advertise.

  • PR companies that offer to sway community opinions and upvote/downvote for their clients.

  • State actors with various propaganda intent.

  • Preparing the bot accounts early in order to sell them to PR companies or other actors above.

  • Actors incentivized to try to turn this service into a shithole and keep users in the mainstream channels, for one reason or another: call it financial incentives, or the ability to control narratives on other platforms.

  • Bots that push finance-related news stories or sentiment, e.g. trying to pump crypto markets.

These are just off-the-top-of-my-head ideas about the types of bots and the actors running them. I don't really have any experience with this, though; I'm just wondering what everyone thinks the intent is.

[–] [email protected] 2 points 1 year ago

I think that's likely to cover common uses outside of just 'for the lulz'.

The 'for the lulz' angle resonates a lot with me - though I know that a decade of dealing with these types assuredly biases me to at least some degree - because it's easy enough to do what they're doing now AFTER you figure out how you're going to monetize it, and signups this aggressive and this widespread don't really make sense to me otherwise.

In my experience with content moderation/fraud/abuse work, you'd often have a very slow trickle of accounts sign up over weeks, months, and, in one situation, years, and THEN they'd all break bad at once: entire servers and instances would light on fire simultaneously, leaving a mess that takes a very long time to clean up.

If you have 5,000 users that signed up all at once, you can literally just delete those rows from the database and probably not impact too many real people. If you have 5,000 users sign up over 6 months, the bad data is dispersed among the good, and you have a much more involved spelunking expedition to embark on. I also found that it was typically done in waves, so you can't do a single cleanup and go 'well, all the accounts that weren't doing the thing must be okay', because, eh, maybe not. (A rough sketch of the burst case is below.)
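To illustrate the easy case, here's a rough TypeScript sketch of flagging signup bursts; the data shape and threshold are assumptions for illustration, not anything from Lemmy's actual schema.

```typescript
// Bucket signup timestamps per hour and flag abnormally busy hours.
// Accounts created in flagged hours are candidates for bulk review.
function flagSignupBursts(signupTimes: Date[], threshold = 100): number[] {
  const perHour = new Map<number, number>();
  for (const t of signupTimes) {
    const bucket = Math.floor(t.getTime() / 3_600_000); // hours since epoch
    perHour.set(bucket, (perHour.get(bucket) ?? 0) + 1);
  }
  return [...perHour.entries()]
    .filter(([, count]) => count > threshold)
    .map(([bucket]) => bucket);
}
```

The slow-trickle case is exactly why this doesn't generalize: spread those 5,000 signups over six months and no single bucket ever crosses the threshold.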

And, also, there's a lot of hand-wringing about developer and instance politics in various blog posts, "news" sources, the fediverse, traditional social media, and so on, from all sides of the spectrum. While I'd never claim to be a centrist or even remotely moderate, the more embedded in one extreme or another you find yourself, the more you can start justifying doing all sorts of stupid shit. And a DDoS (which, quelle surprise, is ongoing right now) is SO trivial to pull off when there aren't many preventative measures in place that don't require a bunch of squabbling internet humans to cooperate and work together: to block signups, clean up the mess that's already there, and build mitigation tools that do things everyone agrees with.

[–] [email protected] 4 points 1 year ago (1 children)

... How many comments would each of 5M bot accounts need to make to overflow an i32 db key ... I also think it looks as if someone is testing disruptive stuff. It may be kids playing, or it may be the chatbot army in preparation.

[–] [email protected] 2 points 1 year ago (1 children)

I'm not a Postgres expert, but a quick look at the pgsql limits says the default integer key maxes out at 2,147,483,647, which, uh, makes sense as a signed 32-bit limit.

Soooo 5 million users would each need to make... 430 posts? ish? I mean, certainly doable if nobody caught on until it was well underway.
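Back-of-envelope, in code, assuming the 5 million bot accounts from the comment above:

```typescript
// How many comments per bot before a signed 32-bit primary key overflows?
const I32_MAX = 2_147_483_647;  // max signed 32-bit integer
const botAccounts = 5_000_000;
console.log(Math.ceil(I32_MAX / botAccounts)); // ≈ 430 comments each
```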

[–] [email protected] 2 points 1 year ago

Aha, that's a Postgres default? I was looking into the code to see some of the DB structure, and I thought: well, I made over 100 comments in 2 weeks, so it wouldn't take too long until that 32-bit space is used up (in normal operation, with some more users).

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (1 children)

Detecting and blocking a whole instance with many bots is somewhat trivial. Detecting and blocking some number of bots inside an instance with 10k users, and an ever-growing number of human ones, is much harder.

[–] [email protected] 3 points 1 year ago (1 children)

Setting up an instance would be more difficult too, I assume.

[–] [email protected] 2 points 1 year ago

It's honestly not too bad; it only took about an hour after a couple of days of research. There's an easy deploy script out there (I don't have the link handy on my phone) that makes it really simple.

[–] [email protected] 8 points 1 year ago

When the whole instance is spam, it's easy to defederate. When it is camouflaged in a legit instance, it's harder to root out.

[–] [email protected] 6 points 1 year ago (1 children)

My guess would be because it is more difficult for other instances to deal with instances that have a combination of bots and actual users.

[–] [email protected] 2 points 1 year ago

This.

You can just domain- or IP-block a dedicated bot server. But maybe you don't want to block a place with a history, and real people.

And the smaller sites are using the application form, so SJW and Lemmy.world are much riper for setting up on, because it's a much bigger decision to block them.

[–] fperson 4 points 1 year ago (1 children)

> Why are these bot operators going through the hassle of joining existing instances

I wonder if there's already a "the bots are from Reddit" conspiracy :D

I really see no point in these actions. I mean, seriously, why would you want to just harm something open?

[–] dot20 2 points 1 year ago

For the same reasons you'd want to harm any other platform.

[–] [email protected] 4 points 1 year ago

I think the other user nailed it. It's easy to look at the list of Lemmy servers and defederate the bot farms by comparing "active users" to "total users" (a quick sketch of that check is below). I guess once the bots are active, that will look a bit different.
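A hedged sketch of that check in TypeScript; the data shape is assumed (in practice you'd pull the numbers from each instance's public stats), and the ratio threshold is made up.

```typescript
interface InstanceStats {
  domain: string;
  totalUsers: number;
  activeUsers: number; // e.g. monthly active users
}

// Flag instances where registered accounts vastly outnumber active ones.
function suspectedBotFarms(instances: InstanceStats[], maxRatio = 100): string[] {
  return instances
    .filter(i => i.activeUsers > 0 && i.totalUsers / i.activeUsers > maxRatio)
    .map(i => i.domain);
}

console.log(suspectedBotFarms([
  { domain: "legit.example", totalUsers: 5_000, activeUsers: 800 },
  { domain: "botfarm.example", totalUsers: 30_000, activeUsers: 12 },
])); // ["botfarm.example"]
```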

[–] [email protected] 2 points 1 year ago

They'd get Fediblocked super quickly, and then it's just a quick copy-and-paste by, you know, like, the 5 guys that administer 90% of Lemmybin users, and they're shut down.