this post was submitted on 24 Oct 2024
131 points (83.2% liked)

submitted 6 days ago* (last edited 6 days ago) by [email protected] to c/technology
 

I've seen a lot of sentiment around Lemmy that AI is "useless". I think this tends to stem from the fact that AI has not delivered on, well, anything the capitalists pushing it have promised it would. That is to say, it has failed to meaningfully replace workers with a less expensive solution - AI systems that actually attempt to replace people's jobs are incredibly expensive (and environmentally irresponsible), and the companies simply lie and say they're not. It's subsidized by that sweet, sweet VC capital so they can keep the lie up. And I say "attempt" because AI is truly horrible at actually replacing people. It's going to make mistakes, and while everybody's been trying real hard to make it less wrong, it's just never gonna be "smart" enough to not have a human reviewing its behavior. Then you've got AI being shoehorned into every little thing that really, REALLY doesn't need it. In that sense, I'd agree that AI is useless.

But AIs have been very useful to me. For one thing, they're much better at googling than I am. They save me time by summarizing articles to just give me the broad strokes, and I can decide whether I want to go into the details from there. They're also good idea generators - I've used them in creative writing just to explore things like "how might this story go?" or "what are interesting ways to describe this?". I never really use what comes out of them verbatim - whether image or text - but they're a good way to explore, and seeing things expressed in ways you never would've thought of (and also the juxtaposition of seeing them next to very obvious expressions) tends to push your mind in new directions.

Lastly, I don't know if it's just because there's an abundance of Japanese language-learning content online, but GPT-4o has been incredibly useful for learning Japanese. I can ask it things like "how would a native speaker express X?" and it gives me good answers that even my Japanese teacher agreed with. It can also give some incredibly accurate breakdowns of grammar. I've tried with less popular languages like Filipino and it just isn't the same, but as far as Japanese goes, it's like having a tutor on standby 24/7. In fact, that's exactly how I've been using it - I have it grade my own translations and give feedback on what could've been said more naturally.
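For the curious, the translation-grading workflow described above can be sketched against a chat-completion API. To be clear, this is an illustrative assumption of how one might set it up, not the poster's actual prompt - the helper name and system-prompt wording are made up:

```python
def build_grading_request(source_text: str, my_translation: str) -> list[dict]:
    """Build a chat payload asking an LLM to grade a Japanese translation.

    The prompt wording is a hypothetical example of the "tutor on
    standby" use described in the post, not a known-good prompt.
    """
    return [
        {
            "role": "system",
            "content": (
                "You are a Japanese tutor. Grade the user's translation, "
                "point out unnatural phrasing, and suggest how a native "
                "speaker would express it."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Source (English): {source_text}\n"
                f"My translation: {my_translation}"
            ),
        },
    ]

messages = build_grading_request("Long time no see!", "久しぶりですね。")

# To actually send it (requires the `openai` package and an API key):
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
#   print(reply.choices[0].message.content)
print(messages[1]["content"])
```

Keeping the payload construction separate from the API call makes it easy to swap in a different model or grade a batch of sentences at once.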

All this to say: AI, when used as a tool rather than a dystopian stand-in for a human, can be a very useful one. So, what are some use cases you guys have where AI actually is pretty useful?

[–] [email protected] 1 points 5 days ago (1 children)

LLMs are not AI. They do not reason. They have no agency. They have no memory. They aren't self-aware, or indeed, aware of anything at all.

The goalposts aren't moving; LLMs just aren't an example of intelligence. You can argue that LLMs acquire and use knowledge, but they don't understand what you asked, or what they're saying. They're just creating a block of text that looks like what a human would write, based on statistics, one word at a time, using a prompt as a seed.
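The "one word at a time, based on statistics, using a prompt as a seed" process can be sketched as a toy Markov chain. This is a deliberate oversimplification - real LLMs use neural networks over subword tokens, not bigram counts - but it shows the same shape of mechanism being described:

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Record which words follow which: the 'statistics' behind next-word choice."""
    counts = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts: dict, seed: str, length: int = 10) -> str:
    """Emit text one word at a time, sampling from whatever followed the
    previous word in training - no meaning involved, just frequency."""
    out = [seed]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(generate(model, "the"))  # locally plausible word chains, no understanding
```

The output reads as grammatical fragments because each adjacent pair was seen in training, which is exactly the "looks like what a human would write" illusion at miniature scale.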

LLMs are just statistical models that generate realistic-looking output. It's an illusion of intelligence. A shadow of understanding. The people buying into their alleged abilities are wildly over-estimating them due to ignorance and apathy.

[–] SlopppyEngineer 2 points 5 days ago (1 children)

They do not reason. They have no agency. They have no memory. They aren’t self-aware, or indeed, aware of anything at all.

And that's true. But those would be properties of a general intelligence. So of course LLMs are not a general intelligence.

LLMs still implement a mastery of language, which is generally seen as an aspect of intelligence. Programs implementing just one aspect or one task are usually called narrow AI. It's still within the domain of AI.

Chess and checkers algorithms are also seen as among the first implementations of AI. Very narrow AI, of course, and the intelligence didn't transfer well to other tasks.

[–] [email protected] 2 points 5 days ago (1 children)

LLMs still implement a mastery of language

I would argue that they do not. Picking statistically likely strings of words based on previous writings is not mastery, it's mimicry.

In order to have a mastery of language, one would first need to understand what the language represented, form an idea, then describe that idea using what they know about the concepts of that idea, and the understanding of language. LLMs do none of these things.

Chess and checkers algorithms are also seen as among the first implementations of AI. Very narrow AI, of course, and the intelligence didn't transfer well to other tasks.

Chess and checkers algorithms are also not examples of intelligence. Again, they're just playing statistics based on their knowledge of the rules of the game, and the moves their opponents are known to deploy.

It's easy to see why that ability didn't translate well to any other task; the system had no concept of what it was doing, or how it might apply to other - also unknowable - concepts.

A human can play chess and learn that they need to sacrifice pieces (losing a battle) to win the over all game (winning the war), and apply that to business or even other games. A human can do this because they understand each concept, both unto themselves, and in greater context of their overall experiences. A human also has the ability to think of these concepts in an abstract way, and adapt them to other contexts. These things are intelligence.

[–] SlopppyEngineer 2 points 5 days ago (2 children)

And your brain is full of neurons that biologically implement statistics and give an output based on previous things heard and read. Down to that level, it's still just statistics. Somehow that's different because it's biological.

And some of my colleagues are experts in mimicry. They don't really understand what they're doing, just saying or doing the same thing they were trained on over and over because they get a reward. If true understanding is the level, many humans would need to be excluded.

[–] [email protected] 2 points 5 days ago (1 children)

Hey, I'll be one of the first in line to suggest that our brains are not special, magical, impossible to create systems. We could probably approximate human-level ability with a few antagonistic models, an image processor, and (crucially) a simple body and locomotion routines (because I don't believe human-level intelligence is possible without being able to directly interface with the world).

My thesis - from my first post in this thread - is that this one system, acting on its own, doing nothing but producing text, is not AI. It's not intelligence, because it doesn't know what it's saying; it's just spitting out (mathematically guided, syntactically-correct-looking, stolen-from-humans) random words.

[–] SlopppyEngineer 1 points 4 days ago (1 children)

Ok, let's check the dictionary.

Artificial intelligence, noun, The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this. In later use also: software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data. Also as a count noun: an instance of this type of software; a (notional) entity exhibiting such intelligence. Abbreviated AI.

So it would still be AI. Just not up to your standards. They really should make some level system, like the SAE levels of driving automation.

[–] [email protected] 2 points 4 days ago (1 children)

Since we're already consulting the Oxford Dictionary;

Intelligence - The faculty of understanding; intellect. Also as a count noun: a mental manifestation of this faculty, a capacity to understand.

Intelligent - Having a high degree or good measure of understanding; quick to understand; knowing, sagacious.

It's great that they have a non-technical, linguist's supposition of what "AI" is, but if something is going to meet the standard of "Artificial Intelligence", I think it would first need to meet the definitions of "Artificial" (which is an easy test in this case) and "Intelligent" (see above).

I'm not talking about simulating intelligence, I'm talking about actually having it. In order to do that - as I said before - you need to be able to demonstrate understanding. LLMs do not understand things. They spit out random words, guided by a fancy algorithm. You can demonstrate this in real time: ask it a question, get an obviously wrong answer, then call it on its own response. It will generate an apology, then give you a new answer. You can do this infinitely. It's not even paying attention to itself, and you're suggesting that it has an understanding of what it's saying.

As to the definition you posted; Humans thinking they're so special that only they can do certain tasks, then being proven wrong, does not make another entity (a computer, in this case) more intelligent. It only proves that the task didn't require a human. This definition is based on a false equivalency (specifically: "if only a human can do something, it requires intelligence"). If this is the bar (which is set absurdly low), then computers achieved AI the first time a simple if/then statement was created (even though a human came up with the process, wrote the statement, and the process has no ability to adapt to new situations). You don't need intelligence (again, requiring understanding) to follow logic gates (and if you do, then basic circuit boards are also AI, so congratulations, we've had AI since the first AND gate was created in 1924).

[–] SlopppyEngineer 1 points 2 days ago (1 children)

LLMs do not understand things. They spit out random words, guided by a fancy algorithm.

Yes, that's why it's called artificial. It's not true intelligence, it's not natural intelligence, it's artificial, it's not real. Artificial is a synonym for fake in this case. LLM are fake intelligence, and anyone with some real intelligence can see it's fake. It's one of the issues AI developers have. To make the fake better, it needs exponentially more energy and data, exactly because it doesn't have understanding.

“if only a human can do something, it requires intelligence”). If this is the bar (which is set absurdly low)

That always reminds me of the troubles the park rangers had in securing garbage, because "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists."

This definition is based on a false equivalency (specifically: “if only a human can do something, it requires intelligence”). It only proves that the task didn’t require a human. Humans thinking they’re so special

Hence why goal posts keep shifting. There are enough people that want to keep the special feeling. I'd say that self delusion is pretty human but LLMs can fake that pretty well too.

[–] [email protected] 1 points 1 day ago

LLMs do not understand things. They spit out random words, guided by a fancy algorithm. -AlexanderESmith

Yes, that's why it's called artificial. -SlopppyEngineer

Something being artificial has no effect on its qualification as being - or not being - anything else. In this case: intelligent. I grant that it's artificial, but it's not intelligent, so it's not AI. It's... artificial non-intelligence.

And holy fuck - you started by trying to tell me I was moving the goal posts, and now you've strapped them to a rocket and blasted them to another planet.

Hence why goal posts keep shifting.

My posts haven't moved an inch in 30 years. Every time some dumbass tech bro tries to sell AI (and this is - by far - not the first time) I've told people it's bullshit because they didn't create an intelligence; they just developed a shitty algorithm and slapped an AI label on it.

I can't continue this "debate" with you, since you're not conducting your end of it in good faith. You're making emotional arguments and trying to tell me they hold water for a technical definition. I guess your username checks out.
