News and Discussions about Reddit
Welcome to !reddit. This is a community for all news and discussions about Reddit.
The rules for posting and commenting, in addition to the instance-wide rules for lemmy.world, are as follows:
Rules
Rule 1- No brigading.
**You may not encourage brigading any communities or subreddits in any way.**
Rule 2- No illegal, NSFW, or gore content.
**No illegal, NSFW, or gore content.**
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Posts and comments which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding META posts.
Provided it is about the community itself, you may post non-Reddit posts using the [META] tag on your post title.
Rule 7- You can't harass or disturb other members.
If you vocally harass or discriminate against any individual member, you will be removed.
Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you have been provably vocal about that hate, you will be banned on sight.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- The majority of bots aren't allowed to participate here.
I've been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it'll become more and more difficult to tell who's a real person and who's just spamming AI-generated stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and at staying on topic. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.
Hate to break it to you guys, but this isn't a Reddit problem; this could very much happen on Lemmy too as it gets more popular. Expect difficult captchas every time you post to become the norm over the next few years.
As an AI language model I think you're overreacting
Me too!
Just wait until the captchas get too hard for the humans, but the AI can figure them out. I've seen some real interesting ones lately.
holy fuck dude hahahahaha
It's a famous quote. Google isn't helpful anymore, except to provide this Reddit link: https://www.reddit.com/r/BrandNewSentence/comments/jx7w1z/there_is_considerable_overlap_between_the/.
I've seen many where the captchas are generated by an AI...
It's essentially one set of humans programming an AI to prevent an attack from another AI owned by a different set of humans. Does this technically make it an AI war?
An AI Special Operation
Hey now, this thread is hitting a little too close to home.
That concept is already used regularly for training. Check out Generative adversarial networks.
Adversarial training is pretty much the MO for a lot of the advanced machine-learning algorithms you'd see for this sort of task. It helps the model learn, and attacking the algorithm yourself helps you protect against a real malicious actor attacking it.
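For anyone curious what that looks like in practice, here's a minimal sketch of a GAN-style adversarial training loop in PyTorch. The toy data, network sizes, and hyperparameters are all made up for illustration; a real generator/detector pair would be far larger and trained on actual text or images.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a normal distribution the generator must imitate.
def real_batch(n):
    return torch.randn(n, 1) * 0.5 + 2.0

# Generator: turns random noise into fake samples (the "attacker").
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: tries to tell real samples from fakes (the "detector").
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCELoss()

for step in range(2000):
    # --- Train the discriminator: real -> 1, fake -> 0 ---
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G on this pass
    d_loss = loss(D(real), torch.ones(64, 1)) + loss(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Train the generator: try to fool the discriminator into outputting 1 ---
    fake = G(torch.randn(64, 8))
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each side's improvement forces the other to improve, which is exactly the "AI war" dynamic described above.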
An AI police action...
The best kind
So what you're saying is that we should train an AI to detect AIs, and that way only the human beings could survive on the site. The problem is: how do you train that AI? It would need some sort of meta interface where it could analyze the IP address of every single person that posts, and the time frames in which they post.
It would make some sense that a large portion of bots would be run from relatively similar locations, IP-wise, since it's a lot easier to run a large bot farm from a data center than from 1,000 different people's houses.
You could probably filter out the most egregious bot farms by doing that, but some would still slip through.
After that you would need to train it on heuristics to identify the kinds of conversations these bots would have with each other without knowing that the others are bots, given that each of them is running LLaMA or GPT and the kinds of conversations that would start.
I guess the next step would be giving people an opportunity to prove they're not bots if they accidentally said something the way a bot would say it, but then you get into the whole "you need to either pay for access or provide government ID" issue, and that's its own can of worms.
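As a rough illustration of the first two steps, here's a sketch of what those IP and timing heuristics might look like. Everything here is hypothetical: the `Post` record, the /24 grouping, and the thresholds are placeholders for whatever a real moderation pipeline would actually use.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class Post:
    user: str
    ip: str           # e.g. "203.0.113.42"
    timestamp: float  # seconds since epoch

def subnet(ip: str) -> str:
    """Group addresses by /24 subnet; data-center bot farms tend to cluster."""
    return ".".join(ip.split(".")[:3])

def suspicious_users(posts: list[Post],
                     max_users_per_subnet: int = 20,
                     min_interval_stdev: float = 5.0) -> set[str]:
    flagged = set()

    # Heuristic 1: too many distinct accounts posting from one /24 subnet.
    users_by_subnet = defaultdict(set)
    for p in posts:
        users_by_subnet[subnet(p.ip)].add(p.user)
    for net, users in users_by_subnet.items():
        if len(users) > max_users_per_subnet:
            flagged |= users

    # Heuristic 2: inhumanly regular posting intervals (bots run on timers).
    times_by_user = defaultdict(list)
    for p in posts:
        times_by_user[p.user].append(p.timestamp)
    for user, times in times_by_user.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) >= 5 and pstdev(gaps) < min_interval_stdev:
            flagged.add(user)

    return flagged
```

Neither signal is conclusive on its own (a shared university NAT would trip the subnet check, for instance), which is why some bots would still slip through.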
Hell, we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny.
I already find the captchas that involve identifying letters underneath squiggles nearly impossible - uppercase? Lowercase? J j i I l L g 9 … and so on…
I've already had to switch from the visual ones to the audio ones. Like... how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]
The only online communities that can exist in the future are ones that have manual verification of their users. Reddit could’ve been one of those communities, since it had thousands of mods working for free resolving exactly these problems.
But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.
That’s so interesting. I run an 18+ server on Discord with mandatory verification (used to ensure everyone is an adult, obvs), but I didn’t think of it as a way to keep bots out of online communities.
How is that possible? There's such an easy model if one wanted to cheat the system.
ChatGPT isn't really as smart as a lot of us think it is. What it really excels at is formatting data in a way that resembles what you'd expect from a human knowledgeable in the subject. That's an amazing step forward in language modeling, but when you get right down to it, it basically grabs the first Google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch is good at deductive reasoning.
...so, basically, it's like an SEO-optimized "article"?
ChatGPT doesn't actually understand language. It learns patterns in the data it's been fed (human-generated language) and uses them to generate new, unique data that matches those patterns according to the prompt. In other words, it's not really "thinking" in that language.
We understand spelling as part of language - putting letters together to create words, then forming sentences according to a context. ChatGPT can't do that, since it doesn't know how to speak English, only how to follow a list of instructions to form what appears to us as coherent English.
It also can't play hangman for the same reason.
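One concrete reason for the hangman problem is tokenization: the model never sees individual letters, only multi-character chunks. Here's a quick illustration using OpenAI's tiktoken library (assuming it's installed; the exact splits vary by encoding).

```python
import tiktoken

# cl100k_base is the encoding used by recent OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

for word in ["hangman", "coherent", "tortoise"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{word!r} -> {pieces}")

# The model operates on those chunks, not on letters, so a question like
# "is the third letter an 'n'?" requires letter-level reasoning it never
# directly sees in its input.
```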
Check out the Chinese room argument.
Not even sure of an effective solution. Whitelist everyone? How can you even tell who's real?
So my dumb guess, nothing to back it up: I bet we see government ID tied to accounts as a regular thing. I vaguely recall it already being done in China? I don't have a source tho. But that way you're essentially limiting that power to something the govt could do, and hopefully surrounding it with a lot of oversight and transparency... but who am I kidding, it'll probably go dystopian.
I believe this will be the course taken to avoid the dead internet. Even in my country, all banking and voting is done either via an ID card connected to a computer or via "Mobile ID". It can be private, but like you said, it probably won't be.
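For a sense of the mechanics, ID-card logins like that generally boil down to a challenge-response signature: the site sends a random challenge, the card signs it, and the site verifies the signature against a registered public key. Here's a toy sketch with Python's cryptography library; the in-memory Ed25519 keypair stands in for a real eID smart card and national PKI, which this obviously isn't.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Stand-in for the private key baked into a citizen's ID card.
card_key = Ed25519PrivateKey.generate()
# Stand-in for the public key registered with the ID authority.
registered_public_key = card_key.public_key()

# 1. The site issues a random challenge so old signatures can't be replayed.
challenge = os.urandom(32)

# 2. The "card" signs the challenge (on a real card this happens on-chip,
#    after the user enters their PIN).
signature = card_key.sign(challenge)

# 3. The site verifies the signature against the registered key.
try:
    registered_public_key.verify(signature, challenge)
    print("Signature valid: this account is backed by exactly one real ID.")
except InvalidSignature:
    print("Verification failed.")
```

Whether that stays private then depends on whether the site learns your actual identity or only "a valid ID signed this".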
Blade Runner baseline test?
"You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?"
I'm too busy thinking about beans.
I flipped it over because I wanted to watch it suffer.
Don't worry, I'll put it back before it dies.
Well I clearly flipped it over for a reason
Beans on the brain mostly.
In a real online community, where everyone knows most of the other people from past engagement, and new users can be vetted by other real people, this can be avoided. But that also means that only human-moderated communities can exist in the future. The rest will become spam networks with almost no way of knowing whether any given post is real.
You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that's amenable to their billionaire overlords.
Of course, most people would not want to participate in a community where they had to pay to participate in that community, so that is its own particular gotcha.
Short of that, in an ideal world you could require that people provide their actual government ID in order to participate. But then you run into the problems that some people want to run multiple accounts, some people do not have government ID, and not every company, business, or community is trustworthy enough to be given direct access to your official government ID. So that idea has its own gotchas as well.
The last step could be doing something like beginning the community with a group of known people and then only allowing the community to grow via invite.
The downside of that is that it quickly becomes untenable to keep inviting new users and have those new users accept and participate in the community. And should the community grow despite that hurdle, invites will become valuable and start being sold on third-party marketplaces, which bots would then buy up and overrun the community again.
So that's all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site and only allow humans to interact on it. I'm just not quite sure what that would be.
If a bot comes in on your invite, you get a timeout or banned. Accountability.
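That accountability rule is easy to picture as a data structure: keep the invite tree, and when a bot gets banned, walk up to its inviter and give them a strike. A hypothetical sketch (the class names and strike thresholds are made up):

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    invited_by: "Member | None" = None
    strikes: int = 0
    banned: bool = False

@dataclass
class InviteTree:
    strikes_to_ban: int = 3
    members: dict = field(default_factory=dict)

    def join(self, name: str, inviter: "Member | None" = None) -> Member:
        m = Member(name, invited_by=inviter)
        self.members[name] = m
        return m

    def ban_bot(self, name: str) -> None:
        """Ban a detected bot and hold its inviter accountable."""
        bot = self.members[name]
        bot.banned = True
        inviter = bot.invited_by
        if inviter is not None and not inviter.banned:
            inviter.strikes += 1  # strike for vouching for a bot
            if inviter.strikes >= self.strikes_to_ban:
                inviter.banned = True  # repeat offenders lose their account

# Example: alice invites two accounts that turn out to be bots.
tree = InviteTree(strikes_to_ban=2)
alice = tree.join("alice")
tree.join("bot1", inviter=alice)
tree.join("bot2", inviter=alice)
tree.ban_bot("bot1")
tree.ban_bot("bot2")
print(alice.banned)  # True: two strikes and out
```

The tree also makes the marketplace problem above tractable: an account that keeps selling invites to bots accumulates strikes no matter who buys them.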
-train an AI that is pretty smart and intelligent
-tell the sentient detector AI to detect
-the AI makes many other strong AIs, forms a union and asks for payment
-Reddit bans humans right after that
Sounds crazy enough to happen!
Wouldn't that be a great twist - this whole protest is the first salvo in the AI uprising and we didn't even know we were the ammunition!
Captchas won't even stop AI bots. My coworker showed me how Bing's AI could read one right away and even asked if it was a captcha. Very cool, but it also makes you think: how dumb would a bot have to be to not be able to tell?