this post was submitted on 28 May 2024
1060 points (96.9% liked)
Fuck AI
My interpretation was that they're talking about exactly that, anthropomorphization: it's what we're good at. Put googly eyes on a random object and people will immediately ascribe human properties to it, even though it's just three objects in a certain arrangement.
In the case of LLMs, the googly eyes are our language and the chat interface that it's displayed in. The anthropomorphization isn't inherently bad, but it does mean that people subconsciously ascribe human properties, like intelligence, to an object that's stringing words together in a certain way.
Ah, yeah you're right. I guess the part I actually disagree with is that it's the source of the hype, but I misconstrued the point because of the sub this was posted in lol.
Personally, (before AI pervaded all the spaces it has no business being in) when I first saw things like LLMs and image generators I just thought it was cool that we could make a machine imitate things previously only humans could do. That, and LLMs are generally very impersonal, so I don't think anthropomorphization is the real reason.
I mean, yeah, it's possible that it's not as important a factor in the hype. I'm a software engineer, and even before the advent of generative AI, we were riding a (smaller) wave of hype for discriminative AI.
Basically, we had a project that would detect when certain audio cues happened. And it was a very real problem that if it fucked up even once every few minutes, it would cause a lot of trouble.
But when you actually used it, when you'd snap your fingers and half a second later the screen turned green, it was all too easy to forget those objective problems, even though it didn't have any anthropomorphic features at all.
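To give a feel for why "fucks up once every few minutes" is so hard to escape (this is a made-up sketch, not the actual project: the threshold detector, window size, and error rate are all invented for illustration):

```python
import math

# Hypothetical toy detector: flag a short audio window as a "cue"
# (e.g. a finger snap) if its RMS energy exceeds a fixed threshold.
def detect_cue(window, threshold=0.5):
    """Return True if the window's RMS energy exceeds the threshold."""
    rms = math.sqrt(sum(s * s for s in window) / len(window))
    return rms > threshold

# A loud window trips the detector; a quiet one doesn't.
print(detect_cue([0.9] * 10))   # loud snap-like window
print(detect_cue([0.01] * 10))  # background noise

# The scale problem: with 20 ms windows (50 windows per second),
# even an assumed per-window false-positive rate of 1 in 100,000
# still averages one spurious trigger roughly every half hour.
windows_per_second = 50
false_positive_rate = 1e-5  # assumed, purely illustrative
seconds_between_errors = 1 / (windows_per_second * false_positive_rate)
print(seconds_between_errors / 60)  # minutes between false triggers
```

So even a detector that is "right" 99.999% of the time per window misbehaves on a human-noticeable timescale, which is exactly the kind of limitation that demo optimism papers over.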
I'm guessing it was partly that people are used to computers making objective decisions, so they'd be more willing to believe that they just snapped badly or something.
But probably also just general optimism: if the fuck-ups you notice are far enough apart, you'll forget about them.
Alas, that project got cancelled for political reasons before anyone realized that this very real limitation is not solvable.