I explicitly marked the potential explanations as "hypotheses", acknowledging that the shit I said might be wrong. So no, I am clearly not assuming (i.e. taking the dubious for certain).
The implication is incorrect.
"Concept" in this case is simply a convenient abstraction, based on how humans would interpret the output. I'm not claiming that the LLM developed them as an emergent behaviour. If the third hypothesis is correct it would be worth investigating that, but as I said, I'm placing my bets on the second one.
The focus of the test is to understand how the LLM behaves based on what we know it handles (tokens) and what is visible to us (the output).
Feel free to suggest other tests that you believe might shed some light on the phenomenon from the article (an LLM trained on English maths problems being able to solve them in French).
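For what it's worth, here's a rough sketch of the kind of test I mean: feed the same maths problem to a model in English and in French, then compare both the token sequences (what the model actually handles) and the generated text (what is visible to us). The model name and prompts are just placeholders, not anything from the article.

```python
# Minimal sketch: compare tokens and output for the same maths problem
# in English and French. Model and prompts are illustrative placeholders.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder; any causal LM would do for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompts = {
    "en": "Question: What is 17 + 25? Answer:",
    "fr": "Question : Combien font 17 + 25 ? Réponse :",
}

for lang, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    # What the model actually receives: the token sequence.
    print(lang, tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))
    # What is visible to us: the generated continuation.
    output = model.generate(
        inputs["input_ids"],
        max_new_tokens=10,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(lang, tokenizer.decode(output[0], skip_special_tokens=True))
```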