this post was submitted on 10 Apr 2024

Programmer Humor

you are viewing a single comment's thread
[–] [email protected] 60 points 4 months ago* (last edited 4 months ago) (29 children)

LLMs are just very complex and intricate mirrors of ourselves, because they draw on our past ramblings to produce the best response to a prompt. They only feel intelligent because we can't see the inner workings, like the IF/THEN statements of ELIZA; and yet many people were still convinced that ELIZA was talking to them. Humans are wired to anthropomorphize, often to a fault.
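For anyone who hasn't seen how simple ELIZA-style "intelligence" really was, here's a toy sketch of that kind of rule-based pattern matching. The rules here are made up for illustration; they are not Weizenbaum's actual script.

```python
import re

# Toy ELIZA-style rules: a regex pattern mapped to a canned response template.
# Illustrative only; the real ELIZA script was richer but worked the same way.
RULES = [
    (r"I am (.*)", "Why do you say you are {0}?"),
    (r"I feel (.*)", "Tell me more about feeling {0}."),
    (r".*mother.*", "Tell me about your family."),
]

def respond(text):
    # Try each rule in order; the first matching pattern wins.
    for pattern, template in RULES:
        m = re.match(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    # Fallback when nothing matches -- the classic therapist dodge.
    return "Please go on."

print(respond("I am sad"))  # -> Why do you say you are sad?
```

No model of meaning anywhere, just string reflection, and people still felt understood.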

I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. What concerns me is that even though LLMs are not "thinking" themselves, we've dived in head first while ignoring the dangers of misuse and the many flaws they have. That says a lot about how we'll ignore problems in AI development generally, such as the misalignment problem, which AI companies have basically shelved in favor of profits and being first.

HAL from 2001/2010 was a great lesson - it's not the AI...the humans were the monsters all along.

[–] [email protected] 9 points 4 months ago (6 children)

I find that a lot of the reasons people put up for saying "LLMs are not intelligent" are wishy-washy, vague, untestable nonsense. It's rarely something where we can put a human and ChatGPT together in a double-blind test and have the results clearly show that one meets the definition and the other does not. Now, I don't think we've actually achieved AGI, but more for general Occam's Razor reasons than something more concrete; it seems unlikely that we've achieved something so remarkable while understanding it so little.

I recently saw this video lecture by a neuroscientist, Professor Anil Seth:

https://royalsociety.org/science-events-and-lectures/2024/03/faraday-prize-lecture/

He argues that our language is leading us astray. Intelligence and consciousness are not the same thing, but the way we talk about them with AI tends to conflate the two. He gives examples of where our consciousness leads us astray, such as seeing faces in clouds. Our consciousness seems to really like pulling faces out of false patterns. Hallucinations would be the times when the error-correcting mechanisms of our consciousness go completely wrong: you don't only see faces in random objects, but also start seeing unicorns and rainbows on everything.

So when you say that people were convinced that ELIZA was an actual psychologist who understood their problems, that might be another example of our own consciousness giving the wrong impression.

[–] [email protected] 6 points 4 months ago (1 children)

Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense... that's a whole other kettle of fish). I'd consider all "thinking things" machines, but if a machine responds to input in always the same way, then it is non-sentient; whereas if it incurs an irreversible change on receiving any input, a change that can affect its future responses, then it has the potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers: which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works in the same category.

I'm still working on this definition, again just a personal viewpoint.
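If it helps, the distinction I'm drawing can be sketched in a few lines of code (my own toy framing, not a formal definition):

```python
# A "frozen" responder maps the same input to the same output, forever.
class FrozenResponder:
    def reply(self, msg):
        return f"echo: {msg}"

# A stateful responder is irreversibly changed by every input it receives,
# and that change shapes its future responses.
class StatefulResponder:
    def __init__(self):
        self.history = []

    def reply(self, msg):
        self.history.append(msg)  # irreversible change on input
        return f"echo #{len(self.history)}: {msg}"

frozen, stateful = FrozenResponder(), StatefulResponder()
assert frozen.reply("hi") == frozen.reply("hi")      # unchanged by input
assert stateful.reply("hi") != stateful.reply("hi")  # past input altered it
```

By that test, the deployed LLM you chat with is the frozen kind: the weights don't change between your messages.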

[–] [email protected] -1 points 4 months ago (1 children)

How do you know you're conscious?

[–] [email protected] 5 points 4 months ago (1 children)

I read this question a couple of times, initially assuming bad faith, and even considered ignoring it. The ability to change would be my answer. I don't know what you actually mean, though.
