[–] BitSound 5 points 1 year ago (1 children)

I'd disagree, and go so far as to say that it's a baby AGI, and we need new terms to talk about the future of these approaches.

To start, "fancy autocomplete" is correct but useless, in the same way that calling the human brain "just a bunch of meat" is correct but useless. Assume that we built an autocomplete so good at its job that it knew every move you were about to make and every word you were about to speak. Yes, it's "just a fancy autocomplete", but one that must be backed by at least human-level intelligence. At some level of autocomplete ability, there must be a model backing it that can be called "intelligent", even if that intelligence looks nothing like human intelligence.

Similarly, the "fancy autocomplete" that is GPT-4 must have some amount of intelligence, and that intelligence is a baby AGI. When AGI is invoked, people tend to get really excited, but that's what the "baby" qualifier is for. GPT-4 is undeniably good at a large variety of tasks without extra training. You can quibble about what "good" means in this context, but it can handle simple tasks from "write some code" to "what are the key points in this document?" to "tell me a bedtime story" without being specifically trained on those tasks. That was unthinkable a year ago, and it's clearly the sign of a model that has generalized across many different tasks. Hence, AGI. It's not very good at some of those tasks (and surprisingly good at others), but it knows what the task is and tries its best. Hence, baby AGI.

Yeah, it's got a lot of limitations right now. But hardware is only getting cheaper, and we're developing techniques like Chain of Thought prompting, which gives LLMs a kind of short-term working memory and helps immensely. A linguist I know once said that the approaches we're taking are like building a ladder to the moon. Well, we've started building a hell of a ladder, and I'm excited to see where it takes us.
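For anyone who hasn't tried it, Chain of Thought prompting isn't exotic: you just ask the model to write out its intermediate steps before answering, so the context window acts as scratch space. Here's a rough sketch in Python, where `query_llm` is a hypothetical placeholder for whatever chat-completion call you actually use:

```python
# Minimal sketch of Chain of Thought prompting, not tied to any particular API.
# `query_llm` is a hypothetical stand-in for your real chat-completion call.

def query_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to your model of choice.
    return f"<model response to: {prompt!r}>"

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Direct prompt: the model has to jump straight to an answer.
direct_answer = query_llm(question)

# Chain of Thought prompt: the model is asked to spell out intermediate steps
# first. Those steps sit in the context window and act as scratch space it can
# refer back to before committing to a final answer.
cot_answer = query_llm(
    question + "\n\nLet's think step by step, then state the final answer."
)

print(direct_answer)
print(cot_answer)
```

The only difference is that one extra instruction, but on multi-step problems it's often enough to change what the model gets right.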

[–] [email protected] 5 points 1 year ago (1 children)

I don't care what y'all call it, AI, AGI, Stacy, it doesn't change the fact that it was 100% trained on books tagged as "bedtime stories" in order to tell you a bedtime story; it couldn't tell you one otherwise.

Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.

https://en.m.wikipedia.org/wiki/Chinese_room

[–] BitSound 5 points 1 year ago (1 children)

> Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

But why? Also, "has free will" is exactly equivalent to "I cannot predict the behavior of this object". This is a whole separate essay, but "free will" is relative to an observer. Nobody thinks a rock has free will. Some people think cats have free will. Lots of people think humans have free will. That lines up exactly with how hard it is to predict the behavior of each. You don't have free will to an omniscient observer, but that observer must have above-human-level intelligence. If that observer happens to be constructed out of silicon, it doesn't really make a difference.

> Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.

But it can. It uses its prior experience to produce novel output, much like humans do. Hell, I'd say most humans wouldn't pass your test for intelligence, and in fact they're just 3 LLMs in a trenchcoat.

> https://en.m.wikipedia.org/wiki/Chinese_room

Yeah, the reality is that we've built a Chinese room. And saying "well, it doesn't really understand" isn't sufficient anymore. In a few years, are you going to be saying "we're not really being oppressed by our robot overlords!"?

[–] [email protected] 1 points 1 year ago

I'm saying that if there is anyone, including an omniscient observer, who can predict a human's actions perfectly, that is proof that free will doesn't exist at all.