this post was submitted on 13 Dec 2024
61 points (96.9% liked)

Fediverse

The Fediverse is a great system for preventing bad actors from disrupting "real" human-human conversations, because all of the mods, developers and admins are all working out of a desire to connect people (as opposed to "trust and safety" teams more concerned about user retention).

Right now, the Fediverse's main protection seems to be that it just isn't a juicy enough target for wide-scale spam and bad-faith agenda pushers.

But assuming the Fediverse does grow to a significant scale, what (current or future) mechanisms are/could be in place to fend off a flood of AI slop that is hard to distinguish from human? Even the most committed instance admins can only do so much.

For example, I have a feeling all "good" instances will eventually have to turn on registration applications and only federate with other instances that do the same. But it's not crazy to imagine that GPT could soon outmaneuver most registration questions, which means applications would only slow the growth of the problem, not manage it long-term.
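As a rough sketch of what "only federate with instances that gate sign-ups" could look like in practice: the instance names below are invented, and the `registration_mode` values are assumptions modelled loosely on the modes Lemmy-style platforms report (open, application-required, closed) — this is an illustration, not any project's actual API.

```python
# Sketch: build a federation allowlist from the registration policy each
# peer instance reports. Instance names and the mode strings are
# hypothetical examples, not real servers.

CANDIDATE_INSTANCES = {
    "example-instance.social": "RequireApplication",
    "open-signups.example": "Open",
    "closed.example": "Closed",
}

def should_federate(registration_mode: str) -> bool:
    """Allow federation only when the peer gates or closes sign-ups."""
    return registration_mode in {"RequireApplication", "Closed"}

allowlist = sorted(
    host for host, mode in CANDIDATE_INSTANCES.items()
    if should_federate(mode)
)
print(allowlist)  # → ['closed.example', 'example-instance.social']
```

In a real deployment this decision would feed the instance's allow/block configuration rather than a Python list, but the policy test itself is this simple.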

Any thoughts on this topic?

[email protected] 4 points 3 days ago

I have had similar thoughts. I think the answer ultimately lies in active mods who really get to know a community and its users, and who can identify when someone is pushing a narrative even if they can't confirm whether that account is a bot.

Also, as @[email protected] pointed out, registration applications. On startrek.website we have a question that is easy for a Star Trek fan to answer but not easy for a bot (although, getting back to your concern, ChatGPT would probably have no problem with it).
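The kind of check described above can be sketched in a few lines. The question text and accepted answers here are invented for illustration (startrek.website's actual question isn't quoted in the thread), and as the comment itself concedes, an LLM passes this trivially — it only filters low-effort automation:

```python
# Sketch of a registration-application screen: a question easy for a
# fan, checked by keyword match. Question and answer set are made up.

QUESTION = "Name any Starfleet captain."
ACCEPTED = {"kirk", "picard", "sisko", "janeway", "archer", "pike"}

def passes_screening(answer: str) -> bool:
    """Accept if any accepted keyword appears in the normalized answer."""
    words = [w.strip(".,!?") for w in answer.lower().split()]
    return any(word in ACCEPTED for word in words)

print(passes_screening("Jean-Luc Picard"))  # → True
print(passes_screening("I love space"))     # → False
```

A human admin reviewing the free-text answers catches what the keyword match misses, which is why application review is usually manual rather than fully automated.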