this post was submitted on 31 Dec 2024
1806 points (98.0% liked)
Fuck AI
I mean... duh? The purpose of an LLM is to map words to meanings... to derive what a human intends from what they say. That's it. That's all.
It's not a logic tool or a fact regurgitator. It's a context interpretation engine.
The real flaw is that people assume that because it can sometimes understand what you mean (better than past attempts), it must also be capable of reasoning.
Not even that. LLMs have no concept of meaning or understanding. What they do, in essence, is space-filling based on patterns seen during training.
Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven't told them that and they have no idea what a lamp post is. They will just produce results like the shapes you've shown them, which generally end up looking like lamp posts.
Except the "shape" in this case is a sentence or poem or self-insert erotic fan fiction, none of which an LLM "understands"; it just matches the shape of what's been written so far against previous patterns and extrapolates.
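Here's a deliberately silly sketch of that idea, a toy bigram table (my own illustration, nothing like a real transformer, and the corpus and function names are made up): it "continues" a prompt by copying whichever word most often followed the previous one in its training text. No meaning anywhere, just counted patterns.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus".
training_text = (
    "the lamp post stands on the corner . "
    "the lamp post flickers at night . "
    "the cat sits under the lamp post ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev_word, next_word in zip(training_text, training_text[1:]):
    follows[prev_word][next_word] += 1

def complete(prompt, length=5):
    """Extend the prompt by repeatedly picking the most common continuation."""
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Extrapolate: take the most frequently seen next word.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the lamp"))  # continues the familiar "shape": "the lamp post ..."
```

It will happily "complete the shape" of any prompt that resembles its training text, without ever having a concept of a lamp post.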
Well yes... I think that's essentially what I'm saying.
It's debatable whether our own brains really operate any differently. For instance, if I say the word "lamppost", your brain determines the meaning of that word based on the context of my other words around "lamppost" and also all of your past experiences that are connected with that word - because we use past context to interpret present experience.
In an abstract, nontechnical way, training a machine learning model on a corpus of data is sort of like trying to give it enough context to interpret new inputs in an expected/useful way. In the case of LLMs, it's an attempt to link the use of words and phrases with contextual meanings so that a computer system can interact with natural human language (rather than specially prepared and formatted language like a programming language).
It's all just statistics though. The interpretation is based on ingestion of lots of contextual uses. It can't really understand... it has nothing to understand with. All it can do is associate newly input words with generalized contextual meanings based on probabilities.
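As another toy sketch of "meaning as statistics" (again, just an illustration of the idea, with an invented mini-corpus, not any real system): the word "post" gets associated with whatever it statistically tends to sit next to, nothing more.

```python
from collections import Counter

corpus = [
    "the lamp post on the street",
    "a new blog post about cooking",
    "she wrote a post on the forum",
    "the old lamp post flickered",
]

# Count the immediate neighbours of "post" across the corpus.
neighbours = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        if word == "post":
            neighbours.update(words[max(0, i - 1):i] + words[i + 1:i + 2])

print(neighbours.most_common())
# [('lamp', 2), ('on', 2), ('blog', 1), ...] -- the "interpretation" of "post"
# is just its contextual co-occurrence counts; there is no concept behind it.
```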
I wish you'd talked more about how we humans work. We are at the mercy of pattern recognition. Even when we try not to be.
When "you" decide to pick up an apple it's about to be in your hand by the time your software has caught up with the hardware. Then your brain tells "you" a story about why you picked up the apple.
I really don't think that is always true. You should see me going back and forth in the kitchen trying to decide what to eat 😅
Same reaction here, but scientific, peer-reviewed, published studies are very important if, for example, we want to stop our judicial systems from adopting LLM-based AI.