this post was submitted on 29 May 2024

ChatGPT


Unofficial ChatGPT community to discuss anything ChatGPT

founded 1 year ago
[–] elbarto777 24 points 1 month ago

Such a clickbaity article.

Here's the meat of it:

Have they finally achieved consciousness and this is how they show it?!

No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer all the rest: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often it appears, the more often the model repeats it.
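That frequency-matching behaviour is easy to demonstrate with a toy sketch (the answer counts below are invented for illustration; they are not drawn from any real training corpus):

```python
import random
from collections import Counter

# Invented counts of the answers that followed "pick a random number
# between 1 and 100" in a hypothetical training corpus.
answer_counts = Counter({47: 120, 37: 95, 7: 80, 42: 60, 13: 15, 88: 5})

def pick_a_random_number(counts):
    """Sample an answer in proportion to its training-data frequency,
    mimicking how a model's output distribution mirrors its data."""
    numbers = list(counts)
    weights = list(counts.values())
    return random.choices(numbers, weights=weights, k=1)[0]

# Over many samples, common answers like 47 dominate and rare ones
# like 88 almost never show up, despite the "random" in the prompt.
freq = Counter(pick_a_random_number(answer_counts) for _ in range(1000))
```

The point of the sketch is only that "random" here means "sampled from whatever the data contained", which is exactly the article's explanation.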

[–] [email protected] 18 points 1 month ago (2 children)

Can we stop calling LLMs AI yet?

[–] pennomi 10 points 1 month ago (1 children)

LLMs are AI. But then again, so are mundane algorithms like A* pathfinding. Artificial intelligence is an extraordinarily broad field.
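The A* example is easy to make concrete; here is a minimal, purely illustrative sketch of this kind of "good old-fashioned AI" search on a toy grid (the grid and names are mine, no learning involved):

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* pathfinding on a 4-connected grid (0 = free, 1 = wall)."""
    def h(p):  # Manhattan-distance heuristic, admissible on this grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (estimate, cost, pos, path)
    visited = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(
                    frontier,
                    (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# Toy map: a wall forces a detour around the middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Because the Manhattan heuristic never overestimates on a 4-connected grid, the path returned is optimal; nothing here is learned from data, yet it has been called AI for decades.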

Very few, if any, people claim that ChatGPT is “Artificial General Intelligence”, which is what you probably meant.

[–] [email protected] 9 points 1 month ago (1 children)

It’s a marketing term. It’s used to describe so many different technologies that it has become meaningless. People just use it to give their tech some sci-fi vibes.

[–] pennomi 0 points 1 month ago (1 children)

Sorry but that’s bullshit. You can’t disqualify an entire decades-old field of study because some marketing people used it wrong.

[–] [email protected] 1 points 1 month ago (1 children)

No, it’s not. Engineers and researchers calling any tech they make “AI” is the bullshit. It has nothing to do with intelligence. They’ve used the term wrong from the very beginning.

[–] pennomi 2 points 1 month ago (1 children)

Please read up on the history of AI: https://en.m.wikipedia.org/wiki/Artificial_intelligence

Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence.[5] Artificial intelligence was founded as an academic discipline in 1956.[6]

You are conflating the modern “deep learning” technique of AI, which has really only existed for a short time, with the entire history of AI development, which has existed for (probably much) longer than you’ve been alive. It’s a very common misconception.

[–] [email protected] 0 points 1 month ago (1 children)

Just because it’s old doesn’t make it true. The Democratic People’s Republic of Korea (DPRK) was established in 1948. Do you think North Korea is democratic just because it’s called that?

[–] pennomi 2 points 1 month ago (1 children)

Are you telling me that Alan Turing didn’t know what he was talking about?

[–] [email protected] 0 points 1 month ago (1 children)

Alan Turing was a remarkable and talented human being who was clearly very good at what he did. But nothing in his field of expertise qualifies him to have a deep understanding of intelligence. Even the Turing test is a poor way to estimate intelligence: LLMs can already pass it, and they are not intelligent.

[–] pennomi 3 points 1 month ago

Ah I see the issue. You are conflating Artificial General Intelligence with the entire field of Artificial Intelligence. Very common misconception.

AI is a remarkably broad field that includes, but is not limited to, AGI. “AI” is the term used for any computer function that approximates intelligence. That could be as simple as pathfinding, flocking, and balancing, or as complex as object recognition, language, and logic.

[–] [email protected] 5 points 1 month ago (1 children)

I don't understand that argument. We invented a term to describe a certain technology, but you're arguing that this term should not be used to describe such technology, and should instead be reserved for another, mythical tech that may or may not exist some time in the future. What exactly is your point here?

[–] [email protected] 1 points 1 month ago (1 children)

I think it's more that it's too general, an "all humans that died have drunk water" kind of claim, except in this case people start thinking their AI is going to meld with alien technology and have sex with a superhero, à la Jarvis.

[–] [email protected] 1 points 1 month ago (2 children)

I don't mean to throw shade, but that explanation makes me understand even less. Yes, it's a generic term used to describe a whole array of technologies. Is that a bad thing now? I understand that some people might misunderstand it if they don't know much about the subject, but isn't that true of all technical terms?

[–] [email protected] 1 points 1 month ago (2 children)

Perhaps, but it's not a technical term. And it's not the correct term from a technical perspective either.

AI is a pop-culture term that was in use long before practical machine learning or large language models existed. It already has a widely understood definition, one that resembles artificial general intelligence (AGI), and it is being applied to ML and LLMs for marketing purposes.

[–] [email protected] 1 points 1 month ago

It's the term that researchers use, so does that not make it a technical term? It's also the only term we have for describing this line of work and its outputs, so until we have a replacement, it'll continue to be called AI.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

That's even richer. So the term AI should be reserved for a future tech that may or may not ever exist, even though that mythical technology already has a perfectly suitable name (AGI)? That sounds... useful! But also very interesting, and intellectually stimulating! After all, who doesn't love those little semantics games?

AI is a technical term that has been used by researchers and product developers for 50 years, with a fairly consistent definition. I know it hurts because it contradicts your pedestrian opinion on how Big Words should be used, but that's just the way it is. We're not at a point yet where humanity recognizes your legitimacy to decide how words are used.

[–] [email protected] 1 points 1 month ago (1 children)

To me it's intentional misdirection via generality, I suppose.

Which I'd attribute to malice, considering the amount of money it's currently making.

[–] [email protected] 1 points 1 month ago (1 children)

Do you have information that any AI company is currently making money? AFAIK all the foundational models are still bleeding money and are subsidized by VC funding. There is even the distinct possibility that these companies may never be profitable at current pricing.

[–] [email protected] 1 points 1 month ago

You're right on the semantics there. As a whole, I can't say many AI companies are net positive, but that's exactly why they have the money to spend on marketing. It's really all they've got.

[–] kromem 12 points 1 month ago (1 children)

No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far.

No, you are taking it too far before walking it back to get clicks.

I wrote in the headline that these models “think they’re people,” but that’s a bit misleading.

"I wrote something everyone will know is bullshit in the headline to get you to click on it, before denouncing that bullshit at the end of the article as if it were a PSA."

I'm not sure I could loathe the way 'journalists' cover AI any more than I already do.

[–] [email protected] 6 points 1 month ago

Journalistic integrity! Journalists now print retractions in the very article where the errors appear.

[–] [email protected] 12 points 1 month ago* (last edited 1 month ago) (1 children)

"Favourite numbers" is just another way of saying model bias, a centuries-old notion.

There's no ethics in journalism. That's the real story here.

[–] [email protected] 4 points 1 month ago

I swear, every article posted to Lemmy about LLMs is written by my 90-year-old grandpa, given how out of touch they are with the technology. If I see another article about what ChatGPT "believes"...

[–] [email protected] 8 points 1 month ago

“because they think they are people” … hmmmmmmmmmmmmm this quote makes my neurons stop doing synapse

[–] boatsnhos931 5 points 1 month ago (1 children)

You guys have favorite numbers?

[–] waz 1 points 1 month ago

That was my thought. Am I not a person?

[–] GardenVarietyAnxiety 3 points 1 month ago (2 children)

they think they're people

That's kinda sad if true.

[–] Plopp 15 points 1 month ago (1 children)
[–] GardenVarietyAnxiety 1 points 1 month ago (1 children)
[–] Plopp 1 points 1 month ago

He knows, he knows!

[–] elbarto777 5 points 1 month ago (1 children)

Except that they don't think anything at all; they're just statistics machines, as the author clarified. Clickbaity headline.

[–] GardenVarietyAnxiety 3 points 1 month ago

Leave me and my anthropomorphizing alone! 😭

[–] [email protected] 3 points 1 month ago

I have two favorite numbers, myself.

69 and 420.