[–] [email protected] 35 points 7 months ago (1 children)

it's almost like this thing has no internal conceptual representation! I know this can't possibly be, millions of promptfans and promptfondlers have told me it can't be so, but it sure does look that way! wild!

[–] [email protected] -5 points 7 months ago (3 children)

It must have some internal models of some things, or else it wouldn't be possible for it to consistently make coherent and mostly reasonable statements. But the fact that it has a reasonable model of things like grammar and conversation doesn't imply that it has a good model of literally anything else, which is unlike a human, for whom a basic set of cognitive skills is presumably transferable. Still, the success of LLMs at their actual language-modeling objective is a promising indication that it's feasible for an ML model to learn complex abstractions.

[–] [email protected] 22 points 7 months ago (1 children)

if I copy a coherent sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements

[–] [email protected] -5 points 7 months ago* (last edited 7 months ago) (2 children)

Yes, but that's not how LLMs work. My statement depends heavily on the fact that an LLM like GPT is coaxed into coherence by unsupervised or semi-supervised training. The evidence of an internal model (of language and related concepts) is that the training process works at all, not merely the fact that something outputs coherent statements.
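
To be concrete about what "unsupervised" means here, the sketch below shows roughly the shape of the objective (a toy PyTorch stand-in with made-up sizes and random tokens, nothing like a real transformer): the training labels are just the next tokens of the text itself, so there are no hand-labeled examples anywhere.

```python
import torch
import torch.nn as nn

# Toy "language model": embedding + linear head. Purely illustrative;
# a real LLM stacks transformer layers between these two pieces.
vocab_size, embed_dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Unsupervised" next-token prediction: each position's target is
# simply the token that follows it in the text. Random ints stand in
# for real tokenized text here.
tokens = torch.randint(0, vocab_size, (1, 64))
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)  # shape (1, 63, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
```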

[–] [email protected] 13 points 7 months ago* (last edited 7 months ago)

if I have a bot pick a random book and copy the first sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements. unsupervised training 👍

[–] [email protected] 13 points 7 months ago

let me free up some of your time so you can go figure out how LLMs actually work

[–] [email protected] 15 points 7 months ago

> It must have some internal models of some things, or else it wouldn't be possible to consistently make coherent and mostly reasonable statements.

Talk about begging the question

[–] [email protected] 14 points 7 months ago (1 children)

it doesn't. that's why we're calling it "spicy autocompletion".
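
(and to spell out the "autocomplete" part mechanically, here's a minimal bigram-chain sketch: obviously nowhere near an LLM's scale, and the corpus is made up, but it has the same "pick the next token from what came before" shape, with no internal world model in sight)

```python
import random
from collections import defaultdict

# Toy bigram "autocomplete": the next word is chosen only from words
# observed to follow the current one. Locally plausible output, zero
# understanding.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

word = "the"
output = [word]
for _ in range(8):
    # fall back to a random corpus word if the current word was never
    # seen with a successor (e.g. the corpus's final word)
    word = random.choice(follows.get(word, corpus))
    output.append(word)
print(" ".join(output))
```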