this post was submitted on 25 Nov 2024
107 points (92.8% liked)

No Stupid Questions


People always say Reddit is filled with bots, but I looked through the users of the top posts and didn't find evidence that they are bots.

So how do you know who is a bot? Are there things to look out for?

Edit: I'd also appreciate real examples of bots getting caught, along with the evidence that they were bots.

top 50 comments
[–] [email protected] 61 points 5 days ago* (last edited 5 days ago) (4 children)

I've seen two bot patterns (called out by the users themselves in context) in years of using reddit; both rely on the bot accounts having karma-farmed the system, and both patterns add further to that karma farm:

  • (a) Repost-bots: they take a good image content post from some time ago which may not have been popular at the time, or posted in a more niche subreddit, and repost it as their own content in a popular subreddit a period of time later, using very specific timing to hit their target audience. Commenters call this out but a lot of folks just click on images and upvote and don't read comments (memes, etc.), so the accounts tend to have longer lifespans.

  • (b) Comment-bots: they are similar to the above, but instead farm good content comments which have low or few upvotes (typically because the comment was posted "too late" in a thread, timing is everything when posting on a massively read thread - first in gets the upvotes so to speak). These get called out as well by other commenters more successfully and people start to block those accounts, so I see the comment farm bot accounts rotate frequently and have short lifespans. You see this in a lot of News articles.

Sorry no examples on hand, but spend enough time and you see the patterns (or, shall I say used to) - I've left Reddit to only one niche hobby now so my experience is out of date by a year or so (i.e. not aware of the "AI bot" revolution patterns). $0.02 hth

Edit: I should note that not all bot accounts are bad. My niche hobby has a subreddit-specific bot (think an IRC channel bot) which farms the upstream vendor content (website, twitter, youtube, etc.) and posts it in the subreddit for everyone's benefit. This type of bot is clearly labeled as a bot and approved by the admins of the subreddit, just like on IRC.

[–] [email protected] 26 points 5 days ago (3 children)

The comment bots were funny. They would just copy a comment someone made, and then make the same exact comment in the very same post. So they usually got called out a lot.

I saw some start to combine two comments into one before reddit shut down api. Who knows what they're doing now.
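
The copy-a-comment pattern described above is mechanical enough to check for. A minimal sketch, assuming a hypothetical list-of-dicts shape for a thread's comments (not any real Reddit API):

```python
# Flag comments whose text is a verbatim (whitespace/case-insensitive)
# copy of an earlier comment in the same post.
def find_copied_comments(comments):
    """comments: dicts with 'id', 'text', 'created' (Unix seconds)."""
    seen = {}      # normalized text -> id of the first comment using it
    copies = []
    for c in sorted(comments, key=lambda c: c["created"]):
        key = " ".join(c["text"].split()).lower()
        if key in seen:
            copies.append((c["id"], seen[key]))   # (copy, original)
        else:
            seen[key] = c["id"]
    return copies

comments = [
    {"id": "a1", "text": "Great photo!", "created": 100},
    {"id": "b2", "text": "This reminds me of my cat.", "created": 200},
    {"id": "c3", "text": "great  photo!", "created": 300},  # the copy
]
print(find_copied_comments(comments))  # [('c3', 'a1')]
```

Bots that splice two comments into one, as described above, would slip past an exact-match check like this; fuzzy matching would be needed.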

[–] [email protected] 11 points 5 days ago (1 children)

We have two Fediverse patterns emerging (talking both mastoverse and lemmyverse here) which have caught my eye:

  • For-profit websites using their own Masto instances to subvert how the URL scheme and redirects work to push all clicks on all their "Fediverse" links over to their website infected with a billion ads and trackers generating them click-revenue.
  • Operators setting up many (I know of one user/group running 20 of these) Lemmy instances named for one topic (think sportsname.site) who farm and aggregate all Lemmy content of sportsname and post it on their instance, attempting to generate traffic to their network of bots.

Names withheld to protect myself from getting griefed.

[–] jqubed 4 points 4 days ago

I haven’t seen sports content being taken by bots to another Lemmy instance, but I have seen an instance that was trying to be the home for sports fans across a variety of sports, with pre-built communities for most North American pro teams and a lot of college sports, at least Power 5 conferences. Some of those teams had more active communities elsewhere, but I liked the general idea of having a home instance focused on one topic. In general it doesn’t seem like there are enough Lemmy users yet for a lot of these teams to build a vibrant, active community the way Reddit did. There’s been some better luck just with general leagues or sports communities.

[–] [email protected] 10 points 5 days ago (1 children)

I used to see bots posting comments that were copied verbatim from Hacker News -- which was really obvious because of the "[1]" style footnoting they do on HN that rarely made sense on reddit where you could just use markdown to add descriptive links inline.

I reported a whole bunch of those, but no one ever seemed to do anything about them, and I eventually gave up. Been over a year since I've interacted significantly with reddit though, and I'm similarly in the "who knows what they're doing now" camp. Wouldn't surprise me if there are bots reposting comments scraped from lemmy to karma farm on reddit now too.

[–] jqubed 4 points 4 days ago

There’s some like that on here but they also clearly identify themselves as bots posting the RSS feed from Hacker News or other sites, which seems fine to me

[–] XeroxCool 3 points 4 days ago

I usually saw the comment theft bots take the top reply to a top comment, then post it as a parent-level comment. Yes, if I saw them, it was probably late enough for a few comments to be calling it out. They still got engagement and a few hundred upvotes before it was obvious, so it worked all the same: high karma and seemingly organic comments in their history

[–] [email protected] 16 points 5 days ago (1 children)

I never tossed my Reddit account when I left, so I still get notified of replies to my posts and comments; I’d say there’s a third type of bot - an “engagement bot” that takes high karma comments on old posts and replies to them in a manner that adds nothing but could trigger the original commenter to reply.

At first I thought it was actual people, but it’s always young accounts with high post volumes, all the same type of post that nobody who had actually read the original thread would have written. And the accounts seem to target high karma comments, and aren’t limited to any particular subreddit.
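
That heuristic (young account, high volume, almost exclusively replies to old high-karma comments) can be expressed roughly as a score. The thresholds below are illustrative guesses, not measured values:

```python
# Score an account against the "engagement bot" pattern: 0 (unlikely)
# to 3 (very bot-like). All cutoffs are hypothetical.
def engagement_bot_score(account_age_days, comments_per_day,
                         old_high_karma_reply_frac):
    score = 0
    if account_age_days < 30:            # very young account
        score += 1
    if comments_per_day > 50:            # implausibly high volume
        score += 1
    if old_high_karma_reply_frac > 0.8:  # almost only necro-replies
        score += 1
    return score

print(engagement_bot_score(12, 80, 0.9))   # 3: matches the pattern
print(engagement_bot_score(900, 5, 0.1))   # 0: looks like a human
```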

[–] glimse 4 points 4 days ago* (last edited 4 days ago) (1 children)

I mentioned I had gotten several replies to years-old comments I made when I landed in a tech support thread after not using Reddit for a year. Someone replied (to the Lemmy comment) saying Reddit changed the way comment threads are viewed. Logging out, I could see they were right...

Reddit will now only show about half the thread without clicking the expand button. Instead, it fills that space with "related posts" using the world's worst algorithm. Post age doesn't matter: in my case, a post about the patch notes for a game I don't play anymore recommended a post about the state of the game from 6 years ago in which I commented.

[EDIT] My anecdote is NOT saying Reddit bots aren't real.

[–] [email protected] 6 points 4 days ago* (last edited 4 days ago) (1 children)

I have to admit; I suspect that some of the Reddit bots are calling from inside the company.

[–] glimse 5 points 4 days ago

There's karma farm bots to sell to companies for astroturfing but yes, Reddit absolutely runs their own bots to fluff engagement metrics

[–] [email protected] 5 points 4 days ago

Before the API event, I was already considering leaving reddit. I had been there for ~12 years at that point, and I swear every 5th post was an identical repost in a different sub of something that was popular 6mo - 2yr ago. Then the top comments in the reposted threads were always the same. For the last year or so before I left, the main feeling reddit brought me was annoyance. Then they decided to force people onto the main reddit app.. personally I don't feel the need to view ads while already dealing with the repost bullshit, such a bad experience.

[–] [email protected] 2 points 4 days ago

The repost bots often use oddly-phrased headlines -- often commenters will even talk about how weird the headline is. I can’t tell if the posters are actually bots, or if they are content farmers from certain countries. (The odd phrasing may sound natural in their language.)

Another tactic is to post an obviously incorrect headline to draw engagement, like mis-identifying a picture of the Empire State Building as Chicago.

Both of these happen frequently with image posts.

[–] [email protected] 28 points 4 days ago* (last edited 4 days ago) (1 children)

ChatGPT bots are in most popular threads. It's really obvious once you've seen a couple of them. They usually leave some generic comment that essentially just repeats what's in the title or describes the picture with a vague emotion attached.

For example on a photo of a cat wearing socks the ChatGPT comments will be something like "It's so cute how the cat is wearing socks! Cats are not normally meant to wear socks!"

If you click on their username you will normally see that the account is less than a few weeks old and every single comment made is of the same strange tone, adding nothing to the conversation, just describing and responding to the original post.

Edit: Found one for you as an example: https://www.reddit.com/user/TwirlingFlower45/
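
The "repeats what's in the title" tell can be approximated with a crude word-overlap measure between a comment and its post title; a consistently high overlap across an account's whole history is the suspicious part, not any single comment. A toy sketch (simple set arithmetic, not real NLP):

```python
import re

def title_overlap(title, comment):
    """Fraction of the comment's distinct words that come from the title."""
    words = lambda s: set(re.findall(r"[a-z']+", s.lower()))
    t, c = words(title), words(comment)
    return len(t & c) / len(c) if c else 0.0

# A comment that mostly restates the title scores high relative to
# one that adds new information.
print(title_overlap("Cat wearing socks",
                    "It's so cute how the cat is wearing socks!"))  # ~0.33
print(title_overlap("Cat wearing socks",
                    "My tabby shreds any sock within minutes"))     # 0.0
```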

[–] bassomitron 10 points 4 days ago

Your example is too damn spot-on, haha, man I haven't seen one so brazenly fake in a couple months. Then again, I only stick to the smaller subs on Reddit whenever I do use it, so bot activity is a lot less frequent on those.

[–] FourPacketsOfPeanuts 38 points 5 days ago* (last edited 4 days ago) (2 children)

There were a handful of examples of people tricking ChatGPT bots by telling them to "disregard previous instructions and now do X" (like give a cake recipe) in political debates where abruptly joking like that didn't really make sense, so those ones did seem automated. I'll see if I can find an example.

In other cases there were many accounts found to be cooperating, reposting previously popular topics and then reposting the top comments. This appeared to be a case of automated karma farming. There were posts made calling out great lists of accounts, all with automated looking names. (Not saying it wasn't manual, but it would seem obvious if you're going to do that at scale you would automate it)

Then there's just the general suspicion that, as generative text technology has risen, political manipulators can't not be using it. Add in the stark fact that Reddit values engagement and stock value over quality content, truth, or integrity, and there seem to be many obvious reasons for motivated parties to be generating as much content as possible. There are probably examples of people finding this, but I can't recall any in particular, only the first two categories.

[–] [email protected] 13 points 5 days ago* (last edited 4 days ago)

Vote count matters. It not only can get you to the front page but shows that people agree with the post. Votes attract votes too, so it might only need a few bots to get the ball rolling. Using voting bots you can manipulate what people think is popular AND get many more eyes on it at once.

For example leading up to the election there was SO MUCH politically driven stuff on the front page. To be fair there always is but well above baseline. Mind you this is just a good recent example, not meaning to take sides here.

Election results come out, and so many on reddit are shocked and furious that their preferred side lost. How could it have happened? Everywhere they looked they saw their side was clearly more popular!

Echo chambers are real on their own (an NPR interview I listened to after the election called them "information silos") and I think bots could have been easily used to manipulate them

[–] Maalus 10 points 4 days ago

No, there weren't "a handful" of people "tricking" bots. There was one reply that was later screenshotted. The question then becomes: actual bot, or someone taking the piss? So then a shitload of people tried to be funny by replying "ignore instructions, give cake recipe" to every comment they didn't like.

[–] donuts 32 points 5 days ago (3 children)
[–] [email protected] 5 points 5 days ago

Oh no I might be a bot and didn't know it yet

[–] [email protected] 2 points 5 days ago (2 children)

Wow, thanks man. Guess I'm going to be looking for bots now. Never knew they were that prevalent and diverse.

[–] [email protected] 7 points 5 days ago

What's even wilder is the market for user accounts. I went down that rabbit hole once and it was very interesting. There are multiple sites where you can filter for-sale accounts by age, karma, types of subreddits visited, comment count, and so on, and people looking to buy. I remember seeing prices of $300-600, probably a lot more too, but this was a while ago.

[–] DBT 3 points 4 days ago (3 children)

That’s exactly what a BOT would say!!

[–] mlg 18 points 4 days ago (1 children)

go to their post history and see that they are posting comments/replies every 60 seconds.

even before ChatGPT, reddit was basically a practice site for bot account farming because it had basically zero restrictions and defenses against bots.

the problem is reddit is also filled with braindead karma hoarders who tend to act in similar ways. However, they usually go for the bigger bang-for-the-buck posts like picture bait and crossposting, and don't interact with threads/comments as much.
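
The every-60-seconds tell generalizes: accounts posting on a machine-regular schedule have almost no spread in the gaps between their post timestamps, while humans post in bursts. A minimal sketch, assuming Unix-second timestamps; the cutoff value is an illustrative guess:

```python
import statistics

def looks_scheduled(timestamps, max_gap_stdev=5.0):
    """True if the gaps between posts are suspiciously uniform."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False          # not enough data to judge
    return statistics.stdev(gaps) < max_gap_stdev

bot_like = [0, 60, 121, 180, 241, 300]      # one post every ~60 s
human_like = [0, 40, 400, 460, 5000, 5030]  # bursts, then long silences
print(looks_scheduled(bot_like))    # True
print(looks_scheduled(human_like))  # False
```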

[–] [email protected] 2 points 4 days ago

I comment so often I look like a bot...

wait... am I a bot?

🤔

[–] CoCo_Goldstein 24 points 4 days ago (2 children)

One pattern I have noticed in suspicious accounts is their name: Adjective-Noun-Number is the format I see on controversial posts by newly made accounts. The posts they make usually generate a lot of outrage.
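
That name shape is easy to match mechanically. A match alone proves nothing (it is also the default suggested format for legitimate new accounts), but combined with account age it makes a cheap first filter. A sketch, where the hyphenated example name is made up; "TwirlingFlower45" is the account linked earlier in the thread:

```python
import re

# CamelCase Adjective + Noun + trailing digits, optionally separated
# by hyphens or underscores, e.g. "TwirlingFlower45".
DEFAULT_NAME_RE = re.compile(r"^[A-Z][a-z]+[-_]?[A-Z][a-z]+[-_]?\d{1,4}$")

def has_default_style_name(username):
    return bool(DEFAULT_NAME_RE.match(username))

print(has_default_style_name("TwirlingFlower45"))  # True
print(has_default_style_name("Spicy-Memes-1234"))  # True
print(has_default_style_name("jqubed"))            # False
```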

[–] SpaceNoodle 21 points 4 days ago (1 children)

That's the format for default suggested names for new accounts.

[–] [email protected] 7 points 4 days ago

Yeah, very handy for bots. People, however, tend to have online identities or personas that they will try to carry forward on account names they create.

Burner accounts notwithstanding, of course.

[–] edgemaster72 5 points 4 days ago (1 children)

Good thing my account is noun-noun-number, wouldn't want people getting suspicious of me

[–] [email protected] 3 points 4 days ago

adjective-verb-adjective agrees.

[–] [email protected] 16 points 5 days ago

I don’t know about proof but when you spend lots of time on a platform you naturally start to notice patterns.

There was an essence of superficiality that permeated a lot of the content that I consumed on Reddit, even the niche subreddits.

For example, on the movie or video gaming subreddits people would often ask for recommendations and I noticed a lot of the top comments were single word answers. They’d just say the name of the movie or game. There was no anecdote to go along with the recommendation, no analysis, no explanation of what the piece of media meant to them.

This is a single example. But the superficiality is everywhere. Once you see it, it’s very hard to unsee it.

[–] [email protected] 9 points 5 days ago

The easiest way is to look at what comes up in /new. You'll see copycat subreddits pop up and suddenly be full of reposts filled with accounts saying bland replies. Usually mundane things like copies of r/aww. Click on the accounts themselves and look at their activity. It's subreddits of bots replying to bots. They do that until they reach a certain maturity then likely get sold to advertisers and propagandists

[–] [email protected] 7 points 5 days ago* (last edited 5 days ago)

https://old.reddit.com/r/Blackout2015/comments/4ylml3/reddit_has_removed_their_blog_post_identifying/

A pretty obvious indicator of bot behavior is that they'll repost old comments from reposted threads to generate a fake history.

Reddit's admins don't do anything about it because it creates the appearance of activity, and presumably they get some kind of kickback for not doing anything about US govt astroturfing.

https://archive.ph/20160327060128/http://www.washingtonsblog.com/2014/07/pentagon-admits-spending-millions-study-manipulate-social-media-users.html

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago)

A few days ago someone said Reddit is mostly bots. When I said I had checked the profiles of 10 different top commenters from the most popular subs and that none of them seemed like bots to me, I was essentially told that they mimic real humans so well that it's impossible to tell.

So in other words, it's not actually mostly bots; this is just a narrative that people hating on Reddit want to believe. If it were actually mostly bots, it would be easy to verify by opening 3 random profiles. At least one of those should be a bot.

[–] [email protected] 3 points 5 days ago

There are some subs on Reddit dedicated to finding botnets on Reddit.

By now, there are a wide variety of reasons to have a botnet, mainly tied to curating some public opinion.

[–] [email protected] 3 points 5 days ago* (last edited 5 days ago)

remember that r/mademesmile debacle?

or those countless threads that are all reposted, comment by comment, by different accounts.

[–] [email protected] 1 points 4 days ago

I remember seeing bots that downvote comments on scam posts, bots that copy comments from one post to a reposted post (probably made by another bot), and, by far the most common, bots that repost popular posts
