It is not magic, and all this "it's magic" discourse is IMO counter-productive. When a model does something interesting, people need to dig into what it's doing and why, so we can build better models; and by "interesting" I mean both accurate and inaccurate (enough of this "it's hallucinating, move on!" nonsense).
And it's still maths and statistics - yes, even if it's complex enough that you lose track of it. To give you an example, it's like trying to determine the exact position of every oxygen and silicon atom in a quartz crystal to know how it should behave: it would be doable if not for the scale.
Now, explaining it: LLMs are actually quite good at translation (or at least better than other machine-based translation methods). Three things might be happening here:
1. the LLM is solving the problem through the tokens of the original language (French);
2. it's internally mapping those onto the tokens of the language it was mostly trained on (English), and solving it there;
3. neither - it's handling the problem through something other than the tokens of either language.
I find #1 unlikely, #2 the most likely, but the one that would interest me the most is #3. It would be closer to how humans handle language; we don't really think too much by chaining morphemes ("tokens"), we mostly handle what those morphemes convey.
It would be far, far, far more interesting if this was coded explicitly into the model, but if it appeared as emergent behaviour it would be better than nothing.
Yep, my sentiment entirely.
I had actually written a couple more paragraphs using weather models as an analogy akin to your quartz crystal example but deleted them to shorten my wall of text...
We have built up models which can predict what might happen to particular weather patterns over the next few days to a fair degree of accuracy. However, to get a 100% conclusive model we'd have to have information about every molecule in the atmosphere, which is just not practical when we have good enough models to get an idea of what is going on.
The same is true for any system of sufficient complexity.
What does any of that actually mean?
You download an LLM. Now what? How do you test this?
I was partially rambling, so I expressed the three hypotheses poorly. A better way to convey it would be: which set of tokens is the LLM using to solve the problem? 1. the French ones, 2. the English ones, or 3. neither?
In #1 and #2 it's still doing nothing "magic", it's just handling tokens as it's supposed to. In #3 it's using the tokens for something more interesting - still not "magic", but cool.
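To make "which set of tokens" a bit more concrete, here's a minimal sketch that only compares the token IDs a model would receive for the same maths problem in English and in French. It assumes the `tiktoken` library and its GPT-style `cl100k_base` vocabulary, which may not match whatever model the article actually tested, and it doesn't test which hypothesis holds; it only shows what the model is given to work with:

```python
import tiktoken

# GPT-style tokenizer; an assumption, other models use different vocabularies.
enc = tiktoken.get_encoding("cl100k_base")

english = "If Anna has 3 apples and buys 4 more, how many apples does she have?"
french = "Si Anna a 3 pommes et en achète 4 de plus, combien de pommes a-t-elle ?"

tokens_en = set(enc.encode(english))
tokens_fr = set(enc.encode(french))

# How much do the two token sets actually overlap?
shared = tokens_en & tokens_fr
print(f"English tokens: {len(tokens_en)}, French tokens: {len(tokens_fr)}")
print(f"Shared token IDs: {len(shared)}")
print("Shared pieces:", [enc.decode([t]) for t in shared])
```

The two inputs will likely share little beyond the numbers, the name, and some punctuation; so if #2 is what's happening, the model has to be doing that mapping internally, because the surface tokens it receives in French barely overlap with the English ones.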
For maths problems, I don't know a way to test it. However, for general problems:
If the LLM is handling problems through the tokens of a specific language, it should fall into the same kind of "trap" as plenty of monolinguals do, when two or more concepts are conveyed through the same word and they end up confusing those concepts.
For example, let's say that we train an LLM with the following corpora:
- one in English, where "free" conveys both "at no cost" (gratis) and "with no restrictions" (libre);
- one in a language that uses two different words for those two concepts (say Portuguese, with "gratuito" vs. "livre").
Then we start asking it about free software, in both languages. Will the LLM be able to distinguish between the two concepts?
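Just to illustrate what that probe could look like once such a model existed (a sketch only: the checkpoint name is hypothetical, and the use of Hugging Face's `transformers` pipeline and of Portuguese as the second language are my assumptions, not anything from the thread):

```python
from transformers import pipeline

# Hypothetical checkpoint: a stand-in for whatever model was trained on the two corpora.
generator = pipeline("text-generation", model="our-bilingual-llm")

prompts = {
    "English": "Is free software always free of charge? Answer briefly.",
    "Portuguese": "Software livre é sempre gratuito? Responda brevemente.",
}

for language, prompt in prompts.items():
    out = generator(prompt, max_new_tokens=60, do_sample=False)
    print(f"--- {language} ---")
    print(out[0]["generated_text"])
```

If the model conflates "at no cost" with "unrestricted" even in the language that keeps the two words separate, that would be a hint that it's leaning on the English tokens (hypothesis #2) rather than on something more abstract.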
This makes some very strong assumptions about what's going on inside the model. We don't know that we can think of concepts as being internally represented or that these concepts would make sense to humans.
Suppose a model sometimes seems to confuse the concepts. There will be wrong examples in the training data. For all we know, it may have learned that this should be done whenever there was an odd number of words since the last punctuation mark.
To feed text into an LLM, the text has to be encoded, and the usual encoding schemes exist for different purposes and aren't suitable. Instead, a text is broken down into tokens. A token can be a single character or an emoji, part of a word, or even more than one word. Each token is represented by a number, and those numbers are what the model takes as input and gives as output. The mapping of tokens to vectors of numbers is called an embedding.
The process of turning text into embeddings is quite involved and is itself learned; the resulting numbers should already relate to the meaning. Because of the way these tokenizers are trained, English words are often a single token, while words from other languages are dissected into smaller parts.
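For what it's worth, the dissection is easy to see with any public tokenizer. A minimal sketch, assuming the Hugging Face `transformers` library (plus PyTorch) and using "gpt2" purely as a stand-in checkpoint, since the thread doesn't name a specific model:

```python
from transformers import AutoTokenizer, AutoModel

# "gpt2" is just a small public checkpoint used for illustration.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

# Common English words often survive as one piece; other languages tend to get chopped up.
for word in ["freedom", "liberté", "Freiheit", "liberdade"]:
    print(f"{word!r} -> {tok.tokenize(word)}")

# Each token ID indexes a row of the learned embedding matrix; that vector,
# not the raw ID, is what the rest of the network actually works with.
ids = tok("free software", return_tensors="pt")["input_ids"]
vectors = model.get_input_embeddings()(ids)
print(vectors.shape)  # (1, number_of_tokens, embedding_dimension)
```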
If an LLM "thinks" in tokens, then that's something it has learned. If it "knows" that a token has a language, then it has learned that.
I explicitly marked the potential explanations as "hypotheses", acknowledging that this shit that I said might be wrong. So no, I am clearly not assuming anything (i.e. taking the dubious as certain).
The implication is incorrect.
"Concept" in this case is simply a convenient abstraction, based on how humans would interpret the output. I'm not claiming that the LLM developed them as an emergent behaviour. If the third hypothesis is correct it would be worth investigating that, but as I said, I'm placing my bets on the second one.
The focus of the test is to understand how the LLM behaves based on what we know that it handles (tokens) and something visible for us (the output).
Feel free to suggest other tests that you believe might throw some light on the phenomenon from the article (an LLM trained on English maths problems being able to solve them in French).