this post was submitted on 18 Jun 2023
/kbin meta
I had this conversation on Mastodon a month ago:
NSFW is too broad for a tag. We should move on from it and define content-warning tags specific to the content.
https://tooting.ch/@aroom/110243044941547673
https://tooting.ch/@aroom/110243567677517996
(Sorry, I can't find the conversation; my instance muted mastodon.social, so the posts aren't being displayed. It's a mess.)
Seriously… there are so many things considered occasionally NSFW: curse words, nudity, violence, suicide, even anything tobacco-related on Reddit. I'm personally very tolerant of most content, but I draw the line at both gore and porn; I just don't want that mixed in with my casual browsing. I wish everyone would take it to lemmynsfw so I could filter it appropriately, but due to relaxed instance rules, even a generally tame instance like kbin.social constantly has new niche groups for me to block. Just today it has been furry cartoon porn, feet, celebs…
Reddit never got this right either, but it seems like such an easy problem to solve: there's R-rated and there's porn. Not sure if it's a result of people being too prudish or too loosey-goosey, but the distinction seems very obvious to me.
To make it obvious to everyone, it could simply be built into the app: you choose a category when posting, and you choose which categories you want to see as a user.
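As a rough illustration of that idea (purely a sketch; the types, categories, and field names here are hypothetical and not kbin's actual code or API), posting-time categories plus a per-user filter could look something like this:

```typescript
// Hypothetical content categories a poster would pick from when submitting.
type ContentCategory = "sfw" | "r-rated" | "gore" | "porn";

interface Post {
  title: string;
  body: string;
  category: ContentCategory; // chosen by the poster at submission time
}

// Hypothetical per-user preference: categories the reader never wants to see.
interface UserPreferences {
  blockedCategories: ContentCategory[];
}

// The client (or server) simply drops anything the user has opted out of.
function filterFeed(posts: Post[], prefs: UserPreferences): Post[] {
  return posts.filter((p) => !prefs.blockedCategories.includes(p.category));
}

// Example: a reader who tolerates R-rated content but not gore or porn.
const prefs: UserPreferences = { blockedCategories: ["gore", "porn"] };
const feed: Post[] = [
  { title: "Cute cat", body: "...", category: "sfw" },
  { title: "Horror movie still", body: "...", category: "gore" },
];
console.log(filterFeed(feed, prefs)); // only the "sfw" post remains
```

The point of the sketch is only that both halves are needed: a category attached to the post by its author, and a filter the reader controls.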
This would be great if the other federated communities tagged posts or followed guidelines, but they don't have to, which makes any attempt to censor federated servers pointless. Just block the shit you don't like to see.
I think part of the point here is that communities that aren't following common, reasonable guidelines are ultimately going to be defederated by communities that care about those guidelines.
A more robust tagging system makes blocking what you don't want to see easier and less disruptive to your experience.
Any kind of tag you use to convey "sensitive content that may be offensive" will always become a meme. That's how human nature works, and history proves it over and over.
Examples abound: the skull and crossbones, the nuclear/atomic symbol, X ratings becoming XXX tags in porn titles, Parental Advisory: Explicit Content… the list keeps growing the closer you look. If a symbol, or a meme, is used to denote a warning, it will be co-opted by a subset of folks who will use it ironically. NSFW tags, "trigger warnings": all of these are, at least in part, doomed from the very start to fail and have the exact opposite effect.
There's an interesting problem for nuclear researchers these days: how do you label something as dangerous in such a way that societies ten thousand years from now will still recognize what it means? Because some of the shit they're toying with will still be dangerous then. They gotta think about it. Even today, the image of a skull means something very different depending on the culture it comes from.
That is an absolutely fascinating design question that's going to be living in my head now.
Agreed, violence and gore really bring me down.