
World News


A community for discussing events around the world

Rules:

Similarly, if you see posts along these lines, do not engage. Report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.

We ask users to report any comment or post that violates the rules, and to use critical thinking when reading, posting, or commenting. Users who post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports, or violate the code of conduct will be banned.

All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.


Lemmy World Partners

News [email protected]

Politics [email protected]

World Politics [email protected]


Recommendations

For Firefox users, there is a Media Bias / Fact Check plugin:

https://addons.mozilla.org/en-US/firefox/addon/media-bias-fact-check/


We've had some trouble recently with posts from aggregator links like Google AMP, MSN, and Yahoo.

We're now requiring that links go to the original source, not a conduit.

When a post links to an aggregator, the MBFC bot can attribute the article to the wrong outlet and give a more or less reliable rating than the original source would receive, and it also makes it harder to run down duplicates.

So anything that isn't linked to the original source, but is instead stuck behind Google AMP, MSN, Yahoo, etc., will be removed.

[–] [email protected] 24 points 2 months ago (1 children)

In what way does having the MediaBiasFactCheck bot help with misinformation? It's not very accurate; it's probably less accurate than the average Lemmy reader's preexisting knowledge. People elsewhere in these comments are posting specific examples, in a coherent, respectful fashion.

Most misinformation clearly comes in the form of accounts that post a steady stream of "reliable" articles which don't technically break the rules, and/or in bad-faith comments. You may well be doing plenty of work on that also, I'm not saying you're not, but it doesn't seem from the outside like a priority in the way that the bot is. What is the use case where the bot ever helped prevent some misinformation? Do you have an example when it happened?

I'm not trying to be hostile in the way that I'm asking these questions. It's just very strange to me that there is an overwhelming consensus by the users of this community in one direction, and that the people who are moderating it are pursuing this weird non-answer way of reacting to the overwhelming consensus. What bad thing would happen if you followed the example of the !news moderators, and just said, "You know what? We like the bot, but the community hates it, so out it goes." It doesn't seem like that should be a complex situation or a difficult decision, and I'm struggling to see why the moderation team is so attached to this bot and their explanations are so bizarre when they're questioned on it.