[–] [email protected] 2 points 1 year ago (4 children)

"As a result, no one on Earth fully understands the inner workings of LLMs. Researchers are working to gain a better understanding, but this is a slow process that will take years—perhaps decades—to complete."

Maybe I missed it in the article, but can someone please explain-like-I'm-5 how this is possible?

It's not like we're interacting with a biological organism with mysterious chemistry. Everything about LLMs is completely man-made.

[–] lily33 5 points 1 year ago* (last edited 1 year ago)

It's not the brain's biological origins that make it hard to understand, but its complexity. For example, we understand how the heart works pretty well.

While LLMs are nowhere near as complex as a brain, they're complex enough to be extremely difficult to understand.

But then comes the question: if they're so difficult to understand, how did people make them in the first place?

The way they did it actually bears some similarities to evolution. They created an "empty" model - a large neural network that wasn't doing anything useful or meaningful. But its behavior depends on billions of parameters, and if you tweak any one of them, that behavior changes slightly.
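Here's a toy sketch of that "empty model" idea (a made-up two-layer network in numpy with nine parameters instead of billions, not anyone's actual model):

```python
# A randomly initialized network is "empty": its output is meaningless.
# But nudging any single parameter shifts that output slightly.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # first layer weights (random = "empty")
W2 = rng.normal(size=(3, 1))   # second layer weights

def model(x):
    return np.tanh(x @ W1) @ W2   # two-layer network, tanh activation

x = np.array([[1.0, -0.5]])
print(model(x))        # some meaningless number
W1[0, 0] += 0.01       # tweak one parameter...
print(model(x))        # ...and the behavior changes slightly
```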

Then they expended an enormous amount of computing power tweaking those parameters, each tweak slightly improving the model's ability to predict language. While doing this, they didn't know what any individual number meant. They didn't know how or why each tweak was improving the model. Just that each tweak was making an improvement.

Unlike in evolution, the tweaks aren't random. An algorithm called back-propagation can tell you how to tweak the neural network to make it predict some known data slightly better. But it doesn't tell you why a given tweak helps, or what each parameter change means. That's why we don't understand how LLMs work.
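As a rough sketch of that loop (a made-up toy in PyTorch, nothing like real LLM training in scale, but the same principle: back-propagation computes a per-parameter tweak, and nothing in it explains what any parameter means):

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 2)                    # stand-in "known data"
y = (x[:, :1] * 2 - x[:, 1:]).detach()    # targets the model should predict

for step in range(100):
    loss = ((model(x) - y) ** 2).mean()   # how badly we predict the data
    opt.zero_grad()
    loss.backward()    # back-propagation: a tweak direction per parameter
    opt.step()         # apply the tweaks; the "why" stays opaque
```

After enough steps the loss drops, yet nothing in the loop ever says what any of the thousands of weights individually mean. Scale that opacity up to billions of parameters and you get the situation the article describes.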

One final clarification: it's not a complete black box. We do have some understanding of how LLMs work, mostly at a high level, kind of like we have some basic understanding of how a brain works. We understand LLMs much better than brains, of course.
