Ok, maybe it helps to be more specific. We have an LLM trained on a broad range of human-generated data: news, internet chatter, stories, but also books of all kinds, including ones about philosophy, diplomacy, altruism, etc. But if the topic at hand is "conflict resolution", the overwhelming majority of that data will be about violent solutions. It's true that humans have developed means of peaceful conflict resolution, but they also have a natural tendency to focus on "bad news", so there is far more data available about the shitty things that happen in the world, and all of it gets fed to the chatbot.
To fix this, you would have to train an LLM with a deliberate bias towards educational resources and a moral code based on established principles, for instance by over-weighting those sources in the training data (see the sketch below).
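A minimal sketch of what that re-weighting could look like at the data-sampling stage. The bucket names, documents, and weights here are made up for illustration; real training pipelines do something analogous but at a much larger scale:

```python
import random

# Hypothetical corpus buckets; contents and names are illustrative only.
corpus = {
    "news_and_internet_chatter": ["<conflict-heavy doc>"],      # abundant
    "educational_resources":     ["<peaceful-resolution doc>"],  # scarce
    "philosophy_and_ethics":     ["<ethics doc>"],               # scarce
}

# Over-sample the scarce, desirable sources instead of sampling
# proportionally to raw volume (which would drown them out).
weights = {
    "news_and_internet_chatter": 0.2,
    "educational_resources":     0.5,
    "philosophy_and_ethics":     0.3,
}

def sample_training_document():
    # Pick a bucket by the assigned weight, then a document within it.
    bucket = random.choices(list(weights), weights=list(weights.values()))[0]
    return random.choice(corpus[bucket])
```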
But current implementations (like ChatGPT) don't work that way. Quite the opposite, in fact: in training, we first ingest all the data we can get our hands on (including all the atrocities in the world), and only in a second step do we fine-tune the LLM, e.g. with human feedback, to make it "better".
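A toy sketch of that two-phase structure, just to make the order of operations concrete. The model and data here are trivial stand-ins, not a real LLM, but the outline (huge uncurated phase first, small curated phase second) is the same:

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM; real models are transformers with billions
# of parameters, but the two training phases look the same in outline.
model = nn.Linear(16, 16)
loss_fn = nn.MSELoss()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_on(batches):
    for x, y in batches:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Phase 1: pretraining -- ingest everything we can get, atrocities included.
raw_internet = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(100)]
train_on(raw_internet)

# Phase 2: fine-tuning -- a much smaller, curated dataset (plus human
# feedback, in ChatGPT's case) nudges the pretrained weights toward
# "better" behavior; it doesn't erase what phase 1 learned.
curated = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(10)]
train_on(curated)
```

Note the asymmetry: the second phase adjusts a model that has already absorbed the first, which is exactly why the bias of the pretraining data matters so much.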