I've seen a lot of sentiment around Lemmy that AI is "useless". I think this tends to stem from the fact that AI has not delivered on, well, anything the capitalists that push it have promised it would. That is to say, it has failed to meaningfully replace workers with a less expensive solution - AI that actually attempts to replace people's jobs is incredibly expensive (and environmentally irresponsible), and they simply lie and say it's not. It's subsidized by that sweet, sweet VC capital so they can keep the lie up. And I say "attempts" because AI is truly horrible at actually replacing people. It's going to make mistakes, and while everybody's been trying real hard to make it less wrong, it's just never gonna be "smart" enough to not need a human reviewing its behavior. Then you've got AI being shoehorned into every little thing that really, REALLY doesn't need it. So I can see why people would say that AI is useless.
But AIs have been very useful to me. For one thing, they're much better at googling than I am. They save me time by summarizing articles to just give me the broad strokes, and I can decide whether I want to go into the details from there. They're also good idea generators - I've used them in creative writing just to explore things like "how might this story go?" or "what are interesting ways to describe this?". I never really use what comes out of them verbatim - whether image or text - but it's a good way to explore, and seeing things expressed in ways you never would've thought of (and also the juxtaposition of seeing them next to very obvious expressions) tends to push your mind in new directions.
Lastly, I don't know if it's just because there's an abundance of Japanese language learning content online, but GPT 4o has been incredibly useful in learning Japanese. I can ask it things like "how would a native speaker express X?" and it gives me some good answers that even my Japanese teacher agreed with. It can also give some incredibly accurate breakdowns of grammar. I've tried with less popular languages like Filipino and it just isn't the same, but as far as Japanese goes, it's like having a tutor on standby 24/7. In fact, that's exactly how I've been using it - I have it grade my own translations and give feedback on what could've been said more naturally.
All this to say, AI when used as a tool, rather than a dystopic stand-in for a human, can be a very useful one. So, what are some use cases you guys have where AI actually is pretty useful?
LLMs are not AI. They do not reason. They have no agency. They have no memory. They aren't self-aware, or indeed, aware of anything at all.
The goal posts aren't moving, they just aren't an example of intelligence. You can argue that LLMs acquire and use knowledge, but they don't understand what you asked, or what they're saying. They're just creating a block of text that looks like what a human would write, based on statistics, one word at a time, using a prompt as a seed.
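To make that "one word at a time, based on statistics" point concrete, here's a toy sketch of that generation loop. This is purely illustrative - real LLMs use neural networks over subword tokens, not a word-pair count table, and the corpus here is made up - but the shape of the loop is the same.

```python
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then repeatedly emit the statistically likeliest successor of the
# previous word. Nothing here understands anything.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed, length=5):
    words = [seed]
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break  # never seen this word followed by anything
        # pick the most frequent successor word
        words.append(max(followers, key=followers.get))
    return " ".join(words)

print(generate("the"))  # prints "the cat sat on the cat"
```

The prompt ("the") is literally just the seed of the loop; everything after it falls out of the frequency counts.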
LLMs are just statistical models that generate realistic-looking output. It's an illusion of intelligence. A shadow of understanding. The people buying into their alleged abilities are wildly over-estimating them due to ignorance and apathy.
And that's true. But those would be properties of a general intelligence. So of course LLMs are not a general intelligence.
LLMs still implement a mastery of language, which is generally seen as an aspect of intelligence. Programs implementing just one aspect or task are usually called narrow AI. It's still within the domain of AI.
Chess and checkers algorithms are also seen as among the first implementations of AI. Very narrow AI, of course, and the intelligence didn't transfer well to other tasks.
I would argue that they do not. Picking statistically likely strings of words based on previous writings is not mastery, it's mimicry.
In order to have a mastery of language, one would first need to understand what the language represented, form an idea, then describe that idea using what they know about the concepts of that idea, and the understanding of language. LLMs do none of these things.
Chess and checkers algorithms are also not examples of intelligence. Again, they're just playing statistics based on their knowledge of the rules of the game, and the moves their opponents are known to deploy.
It's easy to see why that ability didn't translate well to any other task: the system had no concept of what it was doing, or how it might apply to other - also-unknowable - concepts.
A human can play chess and learn that they need to sacrifice pieces (losing a battle) to win the overall game (winning the war), and apply that to business or even other games. A human can do this because they understand each concept, both unto themselves, and in the greater context of their overall experiences. A human also has the ability to think of these concepts in an abstract way, and adapt them to other contexts. These things are intelligence.
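For what it's worth, the mechanical search those early game programs do can be sketched in a few lines. This is a toy minimax over a hand-made tree (the numbers and tree are made up for illustration): it scores positions and picks moves, and there is no concept of "sacrifice" or "strategy" anywhere in it.

```python
# Toy minimax: the game-tree search behind classic chess/checkers programs.
# Inner lists are choice points; integers are final position evaluations.
def minimax(node, maximizing):
    if isinstance(node, int):
        return node  # leaf: a precomputed position score
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # prints 3: the branch the opponent can hurt least
```

A "sacrifice" only ever emerges as a side effect of the arithmetic; the program has no representation of the idea itself, which is why nothing transfers to any other domain.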
And your brain is full of neurons that biologically implement statistics and give an output based on previous things heard and read. Down to that level, it's still just statistics. Somehow that's different because it's biological.
And some of my colleagues are experts in mimicry. They don't really understand what they're doing, just saying or doing the same thing they were trained on over and over because they get a reward. If true understanding is the level, many humans would need to be excluded.
Hey, I'll be one of the first in line to suggest that our brains are not special, magical, impossible to create systems. We could probably approximate human-level ability with a few antagonistic models, an image processor, and (crucially) a simple body and locomotion routines (because I don't believe human-level intelligence is possible without being able to directly interface with the world).
My thesis - from my first post in this thread - is that this one system, acting on its own, doing nothing but producing text, is not AI. It's not intelligence, because it doesn't know what it's saying, it's just spitting out (mathematically guided, syntactically-correct-looking, stolen-from-humans) random words.
Ok, let's check the dictionary.
So it would still be AI. Just not up to your standards. They really should make some level system, like the SAE levels of automation.
Since we're already consulting the Oxford Dictionary:
It's great that they have a non-technical, linguist's supposition of what "AI" is, but if something is going to meet the standard of "Artificial Intelligence", I think it would first need to meet the definitions of "Artificial" (which is an easy test in this case) and "Intelligent" (see above).
I'm not talking about simulating intelligence, I'm talking about actually having it. In order to do that - as I said before - you need to be able to demonstrate understanding. LLMs do not understand things. They spit out random words, guided by a fancy algorithm. You can demonstrate this in real time: ask it a question, get an obviously wrong answer, then call it on its own response. It will generate an apology, then give you a new answer. You can do this infinitely. It's not even paying attention to itself, and you're suggesting that it has an understanding of what it's saying.
As to the definition you posted; Humans thinking they're so special that only they can do certain tasks, then being proven wrong, does not make another entity (a computer, in this case) more intelligent. It only proves that the task didn't require a human. This definition is based on a false equivalency (specifically: "if only a human can do something, it requires intelligence"). If this is the bar (which is set absurdly low), then computers achieved AI the first time a simple if/then statement was created (even though a human came up with the process, wrote the statement, and the process has no ability to adapt to new situations). You don't need intelligence (again, requiring understanding) to follow logic gates (and if you do, then basic circuit boards are also AI, so congratulations, we've had AI since the first AND gate was created in 1924).
Yes, that's why it's called artificial. It's not true intelligence, it's not natural intelligence, it's artificial, it's not real. Artificial is a synonym for fake in this case. LLMs are fake intelligence, and anyone with some real intelligence can see it's fake. It's one of the issues AI developers have: to make the fake better, it needs exponentially more energy and data, exactly because it doesn't have understanding.
That always reminds me of the troubles the park rangers had in securing garbage, because "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."
Hence why goal posts keep shifting. There are enough people that want to keep the special feeling. I'd say that self delusion is pretty human but LLMs can fake that pretty well too.
Something being artificial has no effect on its qualification as being - or not being - anything else. In this case: intelligent. I grant that it's artificial, but it's not intelligent, so it's not AI. It's... artificial non-intelligence.
And holy fuck, you started by trying to tell me I was moving the goal posts, you just strapped them to a rocket and blasted them to another planet.
My goal posts haven't moved an inch in 30 years. Every time some dumbass tech bro tries to sell AI (and this is - by far - not the first time) I've told people it's bullshit because they didn't create an intelligence; they just developed a shitty algorithm and slapped an AI label on it.
I can't continue this "debate" with you, since you're not conducting your end of it in good faith. You're making emotional arguments and trying to tell me they hold water for a technical definition. I guess your username checks out.