this post was submitted on 27 Dec 2024
43 points (85.2% liked)

[–] [email protected] 27 points 1 day ago* (last edited 23 hours ago) (10 children)

The right tool for the right job. It's not intelligent, it is just trained. It all boils down to stochastics.
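
A toy sketch of what "stochastic" means here - the vocabulary and probabilities below are invented, but the mechanism is the point: at each step the model ends up with a probability distribution over possible next tokens and samples from it.

```python
import random

# Made-up distribution over the next token (a real model has ~100k tokens
# and computes these probabilities from context; the numbers here are invented).
next_token_probs = {
    "dog": 0.55,
    "cat": 0.25,
    "fox": 0.15,
    "sofa": 0.05,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sampling, not reasoning: run it twice and you may get different answers.
print(random.choices(tokens, weights=weights, k=1)[0])
```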

And then there is the ecological aspect...
Or sometimes the moral aspect, if it is used to decide someone's "fate" in application processing. And it might turn out racist or misogynistic if you use the wrong training data.

[–] sighofannoyance -5 points 18 hours ago* (last edited 18 hours ago) (2 children)

It’s not intelligent, it is just trained.

The same could be said about every human being....

[–] [email protected] 9 points 15 hours ago* (last edited 13 hours ago) (1 children)

An LLM cannot think like you and me. It's not able to solve entirely new problems, and it doesn't have a concept of the world - it paints hands without knowing what a hand does.

It is a system that learns the rules of something by means of reinforcement learning, tuning the coefficients of its heap of equations. It can be better than a human in its narrow area, and I guess it can be good for tedious, repetitive tasks. Nevertheless, it is just a huge coefficient matrix.

But it can only reproduce what is in the training data - you need lots of already-solved examples in there. It doesn't work for entirely new problems.

(That's also the reason why LLMs don't give good answers to questions about specialized niche topics: when there are only one or two studies, there just isn't enough training data for the LLM.)
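
A toy sketch of that limitation - the bigram "coefficient table" below is invented and vastly simpler than a real model, but it shows why nothing outside the training data can come out:

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus; a real model sees trillions of tokens.
training_text = "the cat sat on the mat the cat ate the fish".split()

# "Coefficient matrix": count which word follows which in the training data.
table = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    table[prev][nxt] += 1

def predict(word: str) -> str:
    followers = table.get(word)
    if not followers:
        return "<no training data for this case>"
    return followers.most_common(1)[0][0]

print(predict("the"))   # "cat" - seen often in training
print(predict("fish"))  # never seen as a continuation -> nothing to reproduce
```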

[–] [email protected] -1 points 14 hours ago (1 children)
[–] [email protected] 1 points 13 hours ago* (last edited 4 hours ago)

They replaced the training data with an evaluator (which rates the LLM's output during training?). Interesting, thanks.

Edit: this reminds me of the self-evolving (virtual) robot problem, where a robot is rated by an external moderator and improves over time. E.g.: https://www.sciencedirect.com/science/article/pii/S0925231221003982
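
A hypothetical sketch of that setup - the evaluator, target, and mutation step below are all invented, and it is just a bare hill-climbing loop rather than the method from the linked paper, but it shows the idea of an external rater replacing labelled training data:

```python
import random

def evaluator(candidate: list[float]) -> float:
    # External "moderator": higher is better. Here it secretly scores
    # closeness to an invented target; the learner never sees solved examples.
    target = [0.3, -1.2, 0.7]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

best = [0.0, 0.0, 0.0]
for step in range(2000):
    # Propose a small random change and keep it only if the evaluator approves.
    trial = [x + random.gauss(0, 0.1) for x in best]
    if evaluator(trial) > evaluator(best):
        best = trial

print(best)  # drifts toward the target purely from the evaluator's ratings
```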

[–] [email protected] -2 points 17 hours ago (1 children)

Right? I see comments all the time about it just being glorified pattern recognition. Well... that's what humans do as well: we recognize patterns and then predict the most likely outcome.

[–] [email protected] 1 points 10 hours ago (1 children)

That is one of many things a human brain does. This is like saying the color red is a rainbow because the rainbow has red in it.

[–] [email protected] 1 points 3 hours ago (1 children)
[–] [email protected] 1 points 53 minutes ago

How? You're focusing on one thing a human does and using it to argue how human-like LLMs are, while ignoring everything else humans do. You're missing the forest for the trees.
