this post was submitted on 29 Jun 2023
406 points (95.5% liked)

Reddit

you are viewing a single comment's thread
[–] simple 125 points 1 year ago* (last edited 1 year ago) (6 children)

I've been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it'll become more and more difficult to tell who's a real person and who's just spamming AI-generated stuff. The only giveaway now is that modern text models are pretty bad at talking casually and staying on the topic at hand. As soon as these problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.

Hate to break it to you guys, but this isn't a Reddit problem; this could very much happen on Lemmy too as it gets more popular. Expect difficult captchas every time you post to become the norm these next few years.

[–] 2dollarsim 64 points 1 year ago (1 children)

As an AI language model I think you're overreacting

[–] CIA_chatbot 6 points 1 year ago
[–] [email protected] 40 points 1 year ago (5 children)

Just wait until the captchas get too hard for the humans, but the AI can figure them out. I've seen some real interesting ones lately.

[–] [email protected] 45 points 1 year ago (2 children)

There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.

[–] 2dollarsim 5 points 1 year ago (1 children)
[–] [email protected] 5 points 1 year ago

It's a famous quote. Google isn't helpful anymore, except to provide this Reddit link: https://www.reddit.com/r/BrandNewSentence/comments/jx7w1z/there_is_considerable_overlap_between_the/.

[–] Biran4454 25 points 1 year ago (4 children)

I've seen many where the captchas are generated by an AI...
It's essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?

[–] MusketeerX 13 points 1 year ago (1 children)
[–] CIA_chatbot 2 points 1 year ago

Hey now, this thread is hitting a little too close to home.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

That concept is already used regularly for training. Check out Generative adversarial networks.

[–] [email protected] 6 points 1 year ago

Adversarial training is pretty much the MO for a lot of the advanced machine learning algorithms you'd see for this sort of a task. Helps the ML learn, and attacking the algorithm helps you protect against a real malicious actor attacking it.
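The adversarial back-and-forth described above can be sketched as a toy loop (this is a deliberately simplified illustration, not a real GAN: the "human delay" distribution, thresholds, and update rules are all invented for the demo). A detector learns a cutoff on posting delay, and the bot drifts toward human-looking behavior to evade it:

```python
import random

# Toy adversarial loop: a "bot" tries to mimic human posting delays,
# while a "detector" adapts a threshold separating bots from humans.
# All distributions and constants are assumptions for this sketch.

random.seed(0)

def human_delay():
    # Humans: noisy delays centered around 5 seconds (assumed)
    return random.gauss(5.0, 2.0)

bot_mean = 0.5      # bots start out posting almost instantly
threshold = 1.0     # detector's initial cutoff: below it => "bot"

for _ in range(20):
    bots = [random.gauss(bot_mean, 0.2) for _ in range(200)]

    # Detector step: raise the threshold to just above the bots it caught
    caught = [d for d in bots if d < threshold]
    if caught:
        threshold = max(caught) + 0.1

    # Attacker step: bots drift toward human-looking delays to evade it
    bot_mean += 0.25

print(f"final detector threshold: {threshold:.2f}")
print(f"final bot mean delay: {bot_mean:.2f}")
```

Note the failure mode the thread jokes about: as the bots get more human-like, the detector's threshold climbs until it starts catching actual humans too.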

[–] [email protected] 3 points 1 year ago (2 children)
[–] CIA_chatbot 1 points 1 year ago
[–] [email protected] 1 points 1 year ago

So what you're saying is that we should train an AI to detect AIs, and that way only the human beings could survive on the site. The problem is how you train that AI. It would need some sort of meta interface where it could analyze the IP address of everyone who posts and the time frames they post in.

It would make some sense that a large portion of bots would be run from relatively similar locations IP-wise, since it's a lot easier to run a large bot farm from a data center than from 1,000 different people's houses.

You could probably filter out the most egregious bot farms by doing that. But despite that, some would still slip through.

After that you would need to train it on heuristics to identify the kinds of conversations these bots would have with each other, each not knowing the others are bots, knowing that each of them is using LLaMA or GPT and the kinds of conversations that would start.

I guess the next step would be giving people an opportunity to prove that they're not bots if they ended up accidentally saying something the way a bot would say it, but then you get into the whole "you need to either pay for access or provide government ID" issue, and that's its own can of worms.
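The IP-clustering step of that pipeline is easy to sketch: group accounts by small IP block and flag blocks with implausibly many accounts. This is a rough illustration of the idea, not production moderation tooling; the sample data, the /24 granularity, and the flag threshold are all made up:

```python
from ipaddress import ip_network

# Sketch of the bot-farm heuristic: accounts whose posts come from the
# same small IP block get grouped, and blocks with too many distinct
# accounts are flagged for review. Data and cutoff are invented.

posts = [
    ("alice",   "203.0.113.7"),
    ("bob",     "198.51.100.23"),
    ("bot_001", "192.0.2.10"),
    ("bot_002", "192.0.2.11"),
    ("bot_003", "192.0.2.12"),
    ("bot_004", "192.0.2.13"),
]

def block_of(ip: str) -> str:
    """Collapse an IPv4 address to its /24 block."""
    return str(ip_network(ip + "/24", strict=False))

accounts_per_block: dict[str, set[str]] = {}
for user, ip in posts:
    accounts_per_block.setdefault(block_of(ip), set()).add(user)

FLAG_THRESHOLD = 3  # arbitrary cutoff for the demo
flagged = {blk: users for blk, users in accounts_per_block.items()
           if len(users) >= FLAG_THRESHOLD}
print(flagged)  # the 192.0.2.0/24 block stands out
```

As the comment notes, this only catches the egregious data-center farms; a botnet spread across residential IPs slips right through.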

[–] CIA_chatbot 14 points 1 year ago

Hell we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny

[–] dani 10 points 1 year ago

The captchas that involve identifying letters underneath squiggles I already find nearly impossible - Uppercase? Lowercase? J j i I l L g 9 … and so on….
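The ambiguity complaint above has a simple mitigation on the generator side: build captcha strings only from glyphs that can't be mistaken for each other. A minimal sketch (the exclusion set is my own guess at the worst offenders, not any standard confusables list):

```python
import random
import string

# Generate captcha text from an alphabet with easily-confused glyphs
# (I/l/1, O/0/o, g/9/q, S/5, Z/2, B/8, u/v, ...) removed, so there is
# exactly one plausible reading. Exclusion set is an assumption.

CONFUSABLE = set("Il1O0o9gqQJjSs5Z2B8uvUV")
ALPHABET = [c for c in string.ascii_letters + string.digits
            if c not in CONFUSABLE]

def captcha_text(length: int = 6) -> str:
    return "".join(random.choice(ALPHABET) for _ in range(length))

print(captcha_text())
```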

[–] [email protected] 5 points 1 year ago

I've already had to switch from the visual ones to the audio ones. Like... how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (1 children)

The only online communities that can exist in the future are ones that manually verify their users. Reddit could've been one of those communities, since it had thousands of mods working for free resolving exactly these problems.

But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.

[–] Boingbong 2 points 1 year ago

That’s so interesting. I run an 18+ server on discord with mandatory verification (used to ensure adults obvs) but didn’t think of it as a way to ensure no bots in online communities

[–] MeowyNin 7 points 1 year ago (5 children)

Not even sure of an effective solution. Whitelist everyone? How can you even tell who's real?

[–] [email protected] 8 points 1 year ago (1 children)

So my dumb guess, nothing to back it up: I bet we'll see government ID tied to accounts as a regular thing. I vaguely recall it being done already in China? I don't have a source though. But that way you're essentially limiting that power to something the government could do, and hopefully surrounding it with a lot of oversight and transparency... but who am I kidding, it'll probably go dystopian.

[–] [email protected] 2 points 1 year ago

I believe this will be the course taken to avoid the dead internet. Even in my country, all banking and voting is done either via an ID card connected to a computer or via "Mobile ID". It can be private, but like you said, it probably won't be.
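One way the ID-card approach could stay at least semi-private is for the service to never store the raw national ID, only a keyed hash of it, which is enough to reject duplicate signups. This is a hedged sketch under that assumption; the key, the ID numbers, and the whole flow are invented for the demo, and a real system would need far more (hardware tokens, revocation, legal oversight):

```python
import hashlib
import hmac

SERVER_KEY = b"demo-secret-key"  # would live in an HSM in practice

def id_fingerprint(national_id: str) -> str:
    # Keyed hash: without SERVER_KEY, the fingerprint can't be
    # brute-forced back to the (low-entropy) ID number.
    return hmac.new(SERVER_KEY, national_id.encode(), hashlib.sha256).hexdigest()

registered: set[str] = set()

def signup(national_id: str) -> bool:
    fp = id_fingerprint(national_id)
    if fp in registered:
        return False  # same person, second account: rejected
    registered.add(fp)
    return True

print(signup("38001085718"))  # first account: accepted
print(signup("38001085718"))  # duplicate: blocked
```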

[–] [email protected] 6 points 1 year ago (1 children)
[–] [email protected] 16 points 1 year ago (5 children)

"You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?"

[–] EuroNutellaMan 17 points 1 year ago

I'm too busy thinking about beans.

[–] [email protected] 1 points 1 year ago

I flipped it over because I wanted to watch it suffer.

Don't worry, I'll put it back before it dies.

[–] [email protected] 1 points 1 year ago

Well I clearly flipped it over for a reason

[–] [email protected] 1 points 1 year ago

Beans on the brain mostly.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

In a real online community, where everyone knows most of the other people from past engagements, and new users can be vetted by other real people, this can be avoided. But that also means that only human moderated communities can exist in the future. The rest will become spam networks with nearly no way of knowing whether any given post is real.

[–] [email protected] 3 points 1 year ago (1 children)

You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that's amenable to their billionaire overlords.

Of course, most people would not want to join a community they had to pay to post in, so that is its own particular gotcha.

Short of that, in an ideal world you could require that people provide their actual government ID in order to participate, but then you run into the problem that some people want to run multiple accounts and some people do not have government ID. Further, not every company, business, or even community is trustworthy enough to be given direct access to your official government ID, so that idea has its own gotchas as well.

The last step could be doing something like beginning the community with a group of known people and then only allowing the community to grow via invite.

The downside of that is it quickly becomes untenable to keep inviting new users and have those new users accept and participate in the community, and should the community grow despite that hurdle, invites will become valuable and begin to be sold on third-party marketplaces, which bots would then buy up and overrun the community again.

So that's all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site and only allow humans to interact on it. I'm just not quite sure what that would be.

[–] wookiepedia 1 points 1 year ago

If a bot comes in on your invite, you get a timeout or banned. Accountability.
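That accountability rule composes nicely with the invite-tree idea from the comment above: every account records who invited it, and when an account is banned as a bot, its inviter takes a strike; enough strikes and the inviter goes down too. A minimal sketch (data model and strike limit are invented for illustration):

```python
STRIKE_LIMIT = 2

# who invited whom (invented sample data)
inviter_of = {"bot_a": "spammer", "bot_b": "spammer", "carol": "alice"}
strikes: dict[str, int] = {}
banned: set[str] = set()

def ban(user: str) -> None:
    banned.add(user)
    sponsor = inviter_of.get(user)
    if sponsor and sponsor not in banned:
        strikes[sponsor] = strikes.get(sponsor, 0) + 1
        if strikes[sponsor] >= STRIKE_LIMIT:
            ban(sponsor)  # vouched for too many bots: banned too

ban("bot_a")
ban("bot_b")
print(sorted(banned))  # spammer goes down with their second bot
```

The recursion means a whole subtree of sock puppets can take their sponsor down with them, which is exactly the incentive the comment is after: an invite becomes something you spend reputation on.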

[–] Seven 3 points 1 year ago (2 children)

-train an AI that is pretty smart and intelligent
-tell the sentient detector AI to detect
-the AI makes many other strong AIs, forms a union and asks for payment
-Reddit bans humans right after that

[–] MeowyNin 3 points 1 year ago

Sounds crazy enough to happen!

[–] [email protected] 2 points 1 year ago

Wouldn't that be a great twist - this whole protest is the first salvo in the AI uprising and we didn't even know we were the ammunition!

[–] Kuma 2 points 1 year ago

Captchas won't kill AI bots either. My coworker showed me how Bing's AI knew right away what a captcha said, and it even asked whether it was a captcha. Very cool, but it also makes you think: how dumb must a bot be to not be able to tell?