this post was submitted on 23 May 2024
220 points (98.2% liked)

TechTakes

1013 readers
237 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
all 32 comments
[–] BradleyUffner 52 points 1 month ago (2 children)

LLMs don't understand any words.

[–] [email protected] 16 points 1 month ago

yes. and you wouldn't believe¹ what's in the replies when you make this simple and obvious statement.

¹ who am i kidding. of course you know.

[–] MojoMcJojo 4 points 1 month ago (1 children)

I both agree and disagree. I think of them as golems. They do understand how to respond, but that's as deep as it goes. It's simulated understanding, but a very very good simulation... Okay maybe I do agree.

[–] BradleyUffner 18 points 1 month ago (1 children)

I think that at best you could say that they understand the relationship between tokens. But even that requires a really generous definition of the word "understand".
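the "relationships between tokens" framing can be made concrete with a toy bigram model — a deliberately crude, hypothetical sketch, nothing like a real transformer, where the entire "understanding" is literally a frequency table:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which token follows which. All the model 'knows'
    is a table of co-occurrence frequencies."""
    counts = defaultdict(Counter)
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent follower, or None for an unseen token."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]
```

whether looking things up in that table counts as "understanding" is exactly the generous-definition problem.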

[–] Jimmyeatsausage 10 points 1 month ago (2 children)

There's a saying..."Knowledge is knowing a tomato is a fruit. Wisdom is knowing not to put it in fruit salad."

Meanwhile, LLMs are telling us to put glue on pizza so the cheese sticks. Even if the technology could eventually deliver on the promise, by the time we get there, nobody intelligent will trust it because the tech bros are, again, throwing half-baked garbage out into the world to try and be first to market.

[–] [email protected] 3 points 1 month ago

I didn't trust it from the very moment of the announcement.

[–] [email protected] 2 points 1 month ago (1 children)

Well, so are humans. At least one human, 11 years ago, on reddit.

[–] Jimmyeatsausage 2 points 1 month ago (1 children)

Yes, but the general population doesn't expect shitposts from their Google search. When I'm reading a meme community, I want shitposts. When I'm googling recipes, I'm looking for reliable instructions on how to make dinner. It's all part of the whole "LLMs don't know what they're saying" issue.

[–] [email protected] 1 points 1 month ago

Yeah, fair.

[–] [email protected] 35 points 1 month ago (1 children)

it's almost like this thing has no internal conceptual representation! I know this can't possibly be, millions of promptfans and promptfondlers have told me it can't be so, but it sure does look that way! wild!

[–] [email protected] -5 points 1 month ago (3 children)

It must have some internal models of some things, or else it wouldn't be possible to consistently make coherent and mostly reasonable statements. But the fact that it has a reasonable model of things like grammar and conversation doesn't imply that it has a good model of literally anything else, unlike a human, for whom a basic set of cognitive skills is presumably transferable. Still, the success of LLMs at their actual language-modeling objective is a promising indication that it's feasible for an ML model to learn complex abstractions.

[–] [email protected] 21 points 1 month ago (1 children)

if I copy a coherent sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements

[–] [email protected] -5 points 1 month ago* (last edited 1 month ago) (2 children)

Yes, but that's not how LLMs work. My statement depends heavily on the fact that an LLM like GPT is coaxed into coherence by unsupervised or semi-supervised training. It's the fact that the training process works that is evidence of an internal model (of language and related concepts), not just the fact that something outputs coherent statements.

[–] [email protected] 13 points 1 month ago

let me free up some of your time so you can go figure out how LLMs actually work

[–] [email protected] 12 points 1 month ago* (last edited 1 month ago)

if I have a bot pick a random book and copy the first sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements. unsupervised training 👍

[–] [email protected] 14 points 1 month ago

It must have some internal models of some things, or else it wouldn’t be possible to consistently make coherent and mostly reasonable statements.

Talk about begging the question

[–] [email protected] 14 points 1 month ago (1 children)

it doesn't. that's why we're calling it "spicy autocompletion".
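the "spicy" part is not even a metaphor — it's the sampling temperature. a minimal, hypothetical sketch (softmax sampling over raw scores, not any production model's actual code):

```python
import math, random

def spicy_sample(logits, temperature=1.0, rng=random):
    """Sample one token index from raw scores via softmax.
    Higher temperature -> flatter distribution -> 'spicier' output."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

at temperature near zero it degenerates into plain argmax autocomplete; crank the temperature and you get the spice.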

[–] [email protected] 30 points 1 month ago (1 children)

Ha, I love the sauce on that headline.

[–] [email protected] 5 points 1 month ago (1 children)

It's not the headline used by the publication.

[–] [email protected] 19 points 1 month ago

yes, this is the anti-HN

[–] [email protected] 13 points 1 month ago (1 children)

it seems like it's not the worst way to write text if I don't want to allow an ai to parse my messages...

[–] [email protected] 13 points 1 month ago (1 children)

not being not sure to fail to not write like this could become the opposite of interesting after a time that isn't long, though

[–] [email protected] 8 points 1 month ago

Wow... It's not easy trying not to misunderstand sentences...