Not Friendica, which seems like an obvious Facebook alternative.
Also, I think they're onto something with their fuck-it approach, which every social media platform would benefit from. The internet was mostly like that before. Content moderation primarily serves advertisers; it was never really for the people. Old-internet anarchy was chaotic fun.
I'm lost here. Do you not think fighting toxicity and hate speech is a valid and important function of moderation, one that's just as much or more for the sake of the people as it might be for advertisers?
I think the rise of hate speech on centralised platforms relies very heavily on their centralised moderation and curation via algorithms.
They have all known for a long time that their algorithms promote hate speech, but they also know that curbing that behaviour hurts their revenue, so they don't do it. They chase the fast buck and appease advertisers, who have a naturally conservative bent, and that means rage bait and conventional values.
That's quite apart from when platform owners explicitly support that hate speech and actively suppress left-leaning voices.
I think what we have on decentralised systems, where we curate/moderate for ourselves, works well because most of that open hate speech gets siloed, which I think is the best thing you can do with it.
I think it's just words & images on a screen that we could easily ignore, like people did before, and people are indulging a grandiose conceit by thinking that moderation is that important or serves any greater cause than the interests of moderators. On social media, that cause seems to be serving the consumers, by which I mean the advertisers & commercial interests who pay for the attention of users. The old-internet approach of ignoring, gawking at the freakshow, or ridiculing/flaming toxic & hateful shit worked fine back then: many people disengaged, ragequit, or went outside to do something better. That's not great for advertisers protecting their brand & wanting to keep people pliant & unchallenged as they stay engaged in their uncritical filter bubbles & echo chambers.
With the old internet, safety wasn't internet-nanny, thought-police shit, or "stop burning my virgin eyes & ears". It was an anonymous handle, not revealing personally identifying information (a/s/l?), and not falling for scams or giving out payment information (unless you're into that kinky shit). Glad to see newer social media returning to some of that.
I wholeheartedly agree: the only censorship should be in the individual's hands and should only affect them, i.e. blocking other users or content from being displayed on your own account. My moral compass does not need to be everyone's moral compass.
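As a rough sketch of what "censorship only in the individual's hands" could mean mechanically: the filtering lives on the viewer's own account, so it changes only what they see. All names here are hypothetical, not any real client's API:

```typescript
// Hypothetical per-viewer blocklist: the comments still exist for everyone
// else; only this viewer's rendering of the feed changes.

interface Comment {
  author: string;
  body: string;
}

class PersonalFeed {
  private blockedAuthors = new Set<string>();

  block(author: string): void {
    this.blockedAuthors.add(author);
  }

  // The filter applies only to this account's view of the thread.
  render(comments: Comment[]): Comment[] {
    return comments.filter((c) => !this.blockedAuthors.has(c.author));
  }
}

// Example: one user's block has no effect on anyone else's feed.
const myFeed = new PersonalFeed();
myFeed.block("troll@example.social");
console.log(
  myFeed.render([
    { author: "troll@example.social", body: "rage bait" },
    { author: "alice@lemmy.world", body: "an interesting take" },
  ])
); // only alice's comment is shown
```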
Toxicity doesn't "work fine"; it's contagious and destructive. For projects, it slows progress. For communities in general, it reinforces bad behavior and pushes out newcomers, leading to more negative spaces, isolation, and stagnation, just off the top of my head. These were issues in older communities just as they are in modern ones.
I don't see why we should abandon moderation for your benefit, at the expense of people who care.
Your example of toxicity is Linux maintainers resisting a newer programming language, not wanting to maintain additional bindings, and being stubborn about it? People decide whether to work & agree with each other, so what's your definition of toxicity here? How's moderation supposed to solve that: forcing people to agree & work together unwillingly? Seems rather authoritarian. People should only put words & images on a screen that someone approves of? More authoritarian. And look at all those imaginary problems we can solve!
This goes back to the grandiose conceit I wrote about earlier: some people can't get over themselves, take these words & images on a screen a bit too seriously, and feel they know better than others the right words & images to put on a screen, because of course they do. The rest of us know it's just a bunch of self-important crap that doesn't matter unless we make it matter, and we can ignore it or put our own words & images on a screen or go outside.
You strung together a sequence of misunderstandings, fallacies, and self-victimization into an incoherent pile of garbage that fails to actually respond to anything. Got it, got it, you're god's bravest warrior, resisting the authoritarianism of people who think others shouldn't be forced to tolerate your immaturity whenever you act like a cunt. I'll stop giving you attention now, so sorry.
The Internet was never supposed to have a central authority beyond the DNS tables.
Imagine traveling down a liminal space of tubes and the only signs are nondescript TLDs.
Lemmy has also adopted advertiser-focused moderation patterns. A great example is NSFW. What is NSFW, exactly? Not safe for work? Why is that the only context that matters?
NSFW is just used to mark advertiser-unfriendly content. Why else would nakedness, violence, sexual content, and death be grouped into the same category?
It's way too vague to be useful: you have no idea if you're going to see a nipple or a murder.
Content warnings like Mastodon's are better, but they don't provide a way to reliably filter out categories. I personally think it would be far better to have specific nested tags for certain types of material.
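For illustration, here's a minimal sketch of what that could look like: hierarchical tags with per-user prefix filtering, so blocking "sensitive/violence" hides gore but not nudity. The tag scheme and all names are hypothetical, not an existing Lemmy or Mastodon feature:

```typescript
// Hypothetical nested content tags, e.g. "sensitive/nudity" vs
// "sensitive/violence/gore". Users filter by prefix instead of
// relying on one catch-all NSFW flag.

type Tag = string; // hierarchical, slash-separated

interface Post {
  id: number;
  title: string;
  tags: Tag[];
}

// True if `tag` equals `prefix` or is nested anywhere under it.
function matchesPrefix(tag: Tag, prefix: Tag): boolean {
  return tag === prefix || tag.startsWith(prefix + "/");
}

// Hide any post carrying a tag under one of the viewer's blocked prefixes.
function visiblePosts(posts: Post[], blockedPrefixes: Tag[]): Post[] {
  return posts.filter(
    (post) =>
      !post.tags.some((tag) =>
        blockedPrefixes.some((prefix) => matchesPrefix(tag, prefix))
      )
  );
}

// Example: block graphic violence but still allow artistic nudity.
const feed: Post[] = [
  { id: 1, title: "Figure drawing study", tags: ["sensitive/nudity"] },
  { id: 2, title: "Conflict footage", tags: ["sensitive/violence/gore"] },
];
console.log(visiblePosts(feed, ["sensitive/violence"])); // only post 1 remains
```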
Are you new to the internet? NSFW literally means what it says: it's content that would not be safe for you to be viewing at work.
Advertising has nothing to do with it, which is why you still get ads on NSFW boards on 4chan; they're just NSFW ads.
If you work from home it becomes NSFH.