this post was submitted on 05 Mar 2024
45 points (94.1% liked)

Technology

The article discusses the mysterious nature of large language models and their remarkable capabilities, focusing on the challenges of understanding why they work. Researchers at OpenAI stumbled upon unexpected behavior while training language models, highlighting phenomena such as "grokking" and "double descent" that defy conventional statistical explanations. Despite rapid advancements, deep learning remains largely trial-and-error, lacking a comprehensive theoretical framework. The article emphasizes the importance of unraveling the mysteries behind these models, not only for improving AI technology but also for managing potential risks associated with their future development. Ultimately, understanding deep learning is portrayed as both a scientific puzzle and a critical endeavor for the advancement and safe implementation of artificial intelligence.

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

> This makes some very strong assumptions about what’s going on inside the model.

I explicitly marked the potential explanations as "hypotheses", acknowledging that what I said might be wrong. So no, I am clearly not assuming (i.e. treating the dubious as certain).

> We don’t know that we can think of concepts as being internally represented or that these concepts would make sense to humans. [implied: "you're assuming that LLMs represent concepts internally."]

The implication is incorrect.

"Concept" in this case is simply a convenient abstraction, based on how humans would interpret the output. I'm not claiming that the LLM developed them as an emergent behaviour. If the third hypothesis is correct it would be worth investigating that, but as I said, I'm placing my bets on the second one.

The focus of the test is to understand how the LLM behaves based on what we know it handles (tokens) and something visible to us (the output).
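The "tokens in, tokens out" framing can be made concrete with a toy example. This is a minimal sketch, not any real model's tokenizer; the vocabulary and the greedy longest-match scheme below are invented for illustration (real models use learned BPE/unigram vocabularies), but it shows why the same "concept" reaches the model as entirely different token sequences in different languages.

```python
# Toy illustration of the "tokens in, tokens out" view of an LLM.
# The vocabulary and tokenization scheme are invented for illustration.

def toy_tokenize(text, vocab):
    """Greedy longest-match tokenization over a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

vocab = {"deux", "plus", "two", " "}
# The same arithmetic "concept" becomes different token sequences:
print(toy_tokenize("two plus two", vocab))    # ['two', ' ', 'plus', ' ', 'two']
print(toy_tokenize("deux plus deux", vocab))  # ['deux', ' ', 'plus', ' ', 'deux']
```

Whatever the model "knows" about addition, it only ever sees (and emits) sequences like these, which is why the output is the only window we have on it.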


Feel free to suggest other tests that you believe might shed some light on the phenomenon from the article (an LLM trained on English maths problems being able to solve them in French).
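One such test could be a paired-prompt probe: the same arithmetic problem phrased in English and in French, with the answers compared for agreement. `query_llm` below is a hypothetical stand-in for whatever model interface is being tested; here it is replaced by a stub so the harness itself is runnable.

```python
import re

def extract_number(answer):
    """Pull the first integer out of a free-text model answer."""
    match = re.search(r"-?\d+", answer)
    return int(match.group()) if match else None

def transfer_probe(query_llm, pairs):
    """Compare a model's answers to the same problem in two languages.

    query_llm: callable prompt -> answer text (hypothetical interface).
    pairs: list of (english_prompt, french_prompt, expected) tuples.
    """
    results = []
    for en, fr, expected in pairs:
        a_en = extract_number(query_llm(en))
        a_fr = extract_number(query_llm(fr))
        results.append({
            "en_ok": a_en == expected,   # correct in the training language?
            "fr_ok": a_fr == expected,   # correct in the transfer language?
            "agree": a_en == a_fr,       # same answer regardless of language?
        })
    return results

# Stub model so the harness runs without a real LLM:
def stub_llm(prompt):
    return "The answer is 4."

pairs = [("What is two plus two?", "Combien font deux plus deux ?", 4)]
print(transfer_probe(stub_llm, pairs))
# [{'en_ok': True, 'fr_ok': True, 'agree': True}]
```

The interesting signal would be pairs where `agree` is true but both answers are wrong (suggesting a shared language-independent computation, including a shared bug) versus pairs where the two languages diverge.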