this post was submitted on 09 Jan 2025
176 points (98.4% liked)

Technology

 

It's pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, vastly larger volumes of accurate information would drown out the lies. But is that really the case? A new study by researchers at New York University examines how much medical misinformation can be included in a large language model (LLM) training set before the model starts spitting out inaccurate answers. While the study doesn't identify a lower bound, it does show that by the time misinformation accounts for just 0.001 percent of the training data, the resulting LLM is compromised.
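
To put that threshold in perspective, here's a back-of-the-envelope calculation (the corpus size is an assumed round number for illustration, not a figure from the study): 0.001 percent of a trillion-token training set is still ten million tokens of misinformation.

```python
# Back-of-the-envelope: how much text is 0.001% of a large training set?
# The 1-trillion-token corpus size is an assumption for illustration,
# not a number taken from the NYU study.
corpus_tokens = 1_000_000_000_000        # assumed 1-trillion-token corpus
poison_fraction = 0.001 / 100            # 0.001 percent as a fraction
poison_tokens = corpus_tokens * poison_fraction
print(f"{poison_tokens:,.0f} poisoned tokens")  # prints: 10,000,000 poisoned tokens
```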

[–] pennomi 5 points 3 weeks ago (3 children)

Even curation seems unlikely to fix the problem. I bet a new algorithm is required that lets LLMs validate their responses before they're returned. Basically an "inner monologue" to avoid saying stupid things.
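
A minimal sketch of what such a validation loop might look like: draft an answer, have the model check the draft against retrieved reference text, and only return it once the check passes. Everything here is a hypothetical interface (`llm` and `retriever` are placeholder callables, not a real library API).

```python
from typing import Callable

def answer_with_validation(
    llm: Callable[[str], str],        # hypothetical: prompt in, completion out
    retriever: Callable[[str], str],  # hypothetical: question in, vetted reference text out
    question: str,
    max_retries: int = 2,
) -> str:
    """Generate a draft, check it against trusted reference text, retry if flagged."""
    draft = llm(f"Answer concisely: {question}")
    for _ in range(max_retries):
        # Pull reference passages from a curated corpus (e.g., vetted medical texts).
        evidence = retriever(question)
        verdict = llm(
            "Does the answer below contradict the reference text? "
            "Reply SUPPORTED or CONTRADICTED.\n"
            f"Reference: {evidence}\nAnswer: {draft}"
        )
        if "SUPPORTED" in verdict:
            return draft
        # Regenerate, this time grounding the draft in the reference text.
        draft = llm(f"Using only this reference, answer: {question}\nReference: {evidence}")
    return draft  # best effort after exhausting retries
```

Note the obvious limitation: the checker is only as good as the reference corpus it validates against, and if the verifier is the same model, it inherits the same training data.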

[–] [email protected] 4 points 3 weeks ago

These models are so shit they need a translator. Hilarious.

[–] [email protected] 3 points 3 weeks ago

I could use one of those...

[–] [email protected] 2 points 3 weeks ago

Validate against what? The "inner monologue" is the LLM itself; it can't be any more reliable than the model it's checking.