Showerthoughts
A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted, clever little truths, hidden in daily life.
Here are some examples to inspire your own showerthoughts.
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- No politics
- If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
- A good place for politics is c/politicaldiscussion
- If you feel strongly that you want politics back, please volunteer as a mod.
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct
If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.
What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never have to worry about it.
you are viewing a single comment's thread
I wish there were a fact-checking website that allowed checking any article and calculating scores, e.g. how many claims are linked, where the links point (available or not), whether the linked pages are trustworthy themselves, detecting link cycles (A -> B -> C -> A), and so on. Or at least something that gave us the tools to do community fact-checking in the open.
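A minimal sketch of the kind of scoring this describes, assuming the article's outbound links have already been extracted into a small graph; the `find_cycles` and `score_article` functions, the trust values, and the toy data are all made up for illustration, not a real service.

```python
# Hypothetical sketch: score an article's sourcing from a pre-extracted link graph.
# The graph, availability flags, and trust scores are assumed inputs, not real data.

def find_cycles(graph, start, path=None):
    """Depth-first search that reports link cycles such as A -> B -> C -> A."""
    path = path or [start]
    cycles = []
    for target in graph.get(path[-1], []):
        if target == start:
            cycles.append(path + [start])  # the chain loops back to the article
        elif target not in path:
            cycles.extend(find_cycles(graph, start, path + [target]))
    return cycles

def score_article(article, graph, available, trust):
    """Combine simple signals: how many links exist, resolve, and look trustworthy."""
    links = graph.get(article, [])
    if not links:
        return {"links": 0, "score": 0.0, "cycles": []}
    reachable = [l for l in links if available.get(l, False)]
    trust_avg = sum(trust.get(l, 0.5) for l in reachable) / max(len(reachable), 1)
    cycles = find_cycles(graph, article)
    # Penalise circular sourcing: each cycle through the article halves the score.
    return {
        "links": len(links),
        "score": (len(reachable) / len(links)) * trust_avg * (0.5 ** len(cycles)),
        "cycles": cycles,
    }

# Toy example with a circular reference A -> B -> C -> A.
graph = {"A": ["B"], "B": ["C"], "C": ["A"]}
print(score_article("A", graph, available={"B": True}, trust={"B": 0.9}))
```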
You basically described the PageRank system, but at the article level. I suppose it's theoretically possible with LLM tools, but not an easy task. It also leaves a pretty big gap: how do you decide that a source is trustworthy in the first place?
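For reference, the core of PageRank is just a short power iteration over the link graph; a rough sketch applied to articles instead of web pages, where the damping factor, iteration count, and toy graph are only the usual textbook defaults.

```python
# Rough sketch of PageRank over an article-level link graph; toy data,
# textbook damping factor, and no handling of dangling articles.

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {}
        for n in nodes:
            # Rank flowing into n from every article that links to it,
            # split evenly across each linking article's outbound links.
            incoming = sum(rank[m] / len(graph[m]) for m in nodes if n in graph[m])
            new_rank[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new_rank
    return rank

# Three articles where A and B both cite C; C ends up with the highest rank.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```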
But it might be doable on a simpler level: if you asked the AI whether an article's claims match other sources, you might at least find the outliers.
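That simpler version might look something like the sketch below: ask a model whether each claim is supported by any of the other sources and flag the claims nothing backs up. `ask_model` is a placeholder for whatever LLM client you actually use, and the claim/source inputs are hypothetical.

```python
# Hypothetical sketch of the "find the outliers" idea: check each claim in an
# article against other sources and flag the ones nothing else supports.
# ask_model() is a stand-in for a real LLM call, not an actual API.

def ask_model(question: str) -> str:
    """Stand-in for an LLM call expected to answer 'yes' or 'no'."""
    raise NotImplementedError("plug in your LLM client here")

def find_outliers(claims: list[str], other_sources: list[str]) -> list[str]:
    """Return the claims that none of the other sources appear to support."""
    outliers = []
    for claim in claims:
        supported = any(
            ask_model(f"Does this source support the claim?\n"
                      f"Claim: {claim}\nSource: {src}").strip().lower().startswith("yes")
            for src in other_sources
        )
        if not supported:
            outliers.append(claim)
    return outliers
```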