This post was submitted on 19 Jun 2023
293 points (97.7% liked)

Lemmy.World Announcements

Everything on here is awesome right now, it feels like an online forum from the 2000s, everyone is friendly, optimistic, it feels like the start to something big.

Well, as we all know, AI has gotten smart to the point that CAPTCHAs are useless, and it can engage in social forums disguised as a human.

With Reddit turning into propaganda central, and a greedy CEO with a motive to sell Reddit data to AI farms, I worry that AI could be prompted to target sites like those in the fediverse.

Right now it sounds like paranoia, but I think we are closer to this reality than we may know.

Reddit has gotten nuked, so we built a new community, and everyone is pleasantly surprised by the change of vibe around here, the overall friendliness, and the nostalgia of old forums.

Could this be the calm before the storm?

How will the fediverse protect itself from these hypothetical bot armies?

Do you think Reddit/big companies will make attacks on the fediverse?

Do you think clickbait posts will start popping up in pursuit of ad revenue?

What are your thoughts and insights on this new "internet 2.0"?

(page 2) 50 comments
[–] [email protected] 5 points 2 years ago

I think the key here is going to be coming up with robust protocols for user verification; you can't run an army of spambots if you can't create thousands of accounts.

Doing this well will probably be beyond the capacity of most instance maintainers, so you'd likely end up with a small number of companies that most instances agreed to accept verifications from. The fact that it would be a competitive market - and that a company that failed to do this well would be liable to have its verifications no longer accepted - would simultaneously incentivize them to both a) do a good job and b) offer a variety of verification methods, so that if, say, you wanted to remain anonymous even to them, one company might allow you to verify a new account off of a combination of other long-lived social media accounts rather than by asking for a driver's license or whatever.

And of course there's no reason you couldn't also have 2 or 3 different verifications on your account if you needed that many to have your posts accepted on most instances; yes, it's a little messy, but messy also means resilient.
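A minimal sketch of what that could look like on the instance side, assuming hypothetical verification providers and a per-instance trust threshold (none of these names are real Lemmy APIs, they are just illustrative):

```python
# Illustrative sketch only: a check an instance could run before accepting
# posts from a new account. Provider names and the data model are hypothetical.
from dataclasses import dataclass

@dataclass
class Attestation:
    provider: str   # verification company that issued the attestation
    method: str     # e.g. "id-document", "linked-social-accounts"

# Providers this particular instance has chosen to trust, and how many
# independent attestations a new account needs before its posts are accepted.
TRUSTED_PROVIDERS = {"verifyco.example", "humancheck.example", "fediverify.example"}
REQUIRED_ATTESTATIONS = 2

def account_is_verified(attestations: list[Attestation]) -> bool:
    """Count attestations from distinct trusted providers against the threshold."""
    providers = {a.provider for a in attestations if a.provider in TRUSTED_PROVIDERS}
    return len(providers) >= REQUIRED_ATTESTATIONS

# Example: an account verified through two different providers,
# one of them without handing over any ID document.
attestations = [
    Attestation("verifyco.example", "linked-social-accounts"),
    Attestation("humancheck.example", "id-document"),
]
print(account_is_verified(attestations))  # True
```

The point of each instance keeping its own trust list is exactly the competitive pressure described above: a provider that does a bad job simply gets dropped from the set and its verifications stop counting anywhere.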

[–] jerrimu 5 points 2 years ago

I promise, as an AI experimenter and bot coder, to keep them out of the general population if people don't want them there.

[–] possiblylinux127 4 points 2 years ago

Honestly we need to work on getting the community to manage bots.

[–] A_Chilean_Cyborg 3 points 2 years ago (1 children)

Unpopular opinion, but karma helped control that kind of stuff, karma minimums and such.

[–] CIA 3 points 2 years ago

that also created karma whoring bots so IDK

[–] [email protected] 2 points 2 years ago* (last edited 2 years ago) (2 children)

Yeah, I have all the same concerns. I would trust the fediverse more to try and fix this, and Reddit zero; quite the opposite, I think Reddit might even take advantage of AI bots themselves to keep their dying platform alive and attack competitors like us.

How to fix it, I honestly don't know; there has to be some way to verify humans that's better than CAPTCHA, which can hopefully be implemented once it becomes relevant. If not, we can probably forget about the internet entirely and return to monke.

[–] [email protected] 2 points 2 years ago

Reddit is already full of karma-farming bots reposting popular threads and even comments, so it's definitely possible to have subs run entirely by bots.

[–] [email protected] 2 points 2 years ago (3 children)

In the past month, I was suddenly inundated by bots following my Reddit account every time I posted. I'm enjoying a reprieve here - I hope AI develops to detect the AI and kill the AI.

[–] daniskarma 2 points 2 years ago

I'm seeing some communities being created with the same names they had on Reddit, but without any content.

I don't know if there's an intent to block the names or if some people just want to secure a chair as mod of such a community. But it's something we need to keep an eye on.

[–] snek 2 points 2 years ago

Do you think clickbait posts will start popping up in pursuit of ad revenue?

Now that you mention it... yes.
