this post was submitted on 22 Jun 2024
676 points (98.1% liked)
Technology
you are viewing a single comment's thread
A simple control algorithm "if temperature > LIMIT turnOffHeater" is AI, albeit an incredibly limited one.
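That one-line rule can be written out as working code. This is a minimal sketch with assumed names and an assumed threshold value, just to make the point concrete: a single conditional reacting to a sensed environment already fits the textbook definition of a (trivially limited) AI agent.

```python
LIMIT = 22.0  # assumed target temperature in °C

def control_heater(temperature: float, heater_on: bool) -> bool:
    """Return the heater's new state given the measured temperature."""
    if temperature > LIMIT:
        return False  # too warm: turn the heater off
    return heater_on  # otherwise leave the heater as it is

print(control_heater(25.0, True))   # too warm -> False
print(control_heater(20.0, True))   # below the limit -> stays on
```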
LLMs are not AI. Please don't parrot marketing bullshit.
The former has an intrinsic understanding of a relationship grounded in reality; the latter has nothing of the sort.
I can see what you're getting at: LLMs don't necessarily solve a problem, they just mimic patterns in data.
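The "mimicking patterns in data" point can be illustrated with a toy sketch (nothing like a real LLM, just the smallest possible analogy): a bigram model that counts which word follows which in a tiny corpus and always predicts the most frequent follower. No understanding is involved anywhere, only frequency counting.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate".split()

# Count, for each word, which words were seen following it.
follower_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follower_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

It "predicts" plausibly within its training data, yet clearly has no model of cats or mats; a real LLM is vastly more sophisticated, but the objection in this thread is that it is the same kind of thing.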
That is indeed exactly my point. LLMs are just a language-tailored expression of deep learning, which can be incredibly useful, but should never be confused with any kind of intelligence (i.e. drawing logical conclusions).
I appreciate that you see my point and admit that it makes some sense :)
Example where I think pattern recognition by deep learning can be extremely useful:
But what I'm afraid is happening with people who don't see why a very simple algorithm already counts as AI, yet do consider LLMs to be AI, is that they have mentally decided to reserve "AI" for whatever seems "AGI"-like or human-like. They mistake the patterns produced by LLMs for a conscious being, and that is incredibly dangerous when it comes to trusting the answers LLMs give.
Why do I think they subconsciously imply (self-)awareness or consciousness? Because refusing to count a simple control mechanism like a room thermostat as (very limited) AI means viewing it as "too simple" to be AI. A person with that view is drawing a qualitative distinction between control laws and "AI", where a quantitative distinction between "simple AI" and "advanced AI" would be appropriate.
And such a qualitative distinction, one that elevates a complex word-guessing machine to "intelligence", can only be made by people who actually believe there is understanding behind those word predictions.
That's my take on this.