this post was submitted on 21 Sep 2024
52 points (78.9% liked)

Asklemmy

44119 readers
862 users here now

A loosely moderated place to ask open-ended questions

Search asklemmy 🔍

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

~Icon~ ~by~ ~@Double_[email protected]~

founded 5 years ago
MODERATORS
 

Wondering if Modern LLMs like GPT4, Claude Sonnet and llama 3 are closer to human intelligence or next word predictor. Also not sure if this graph is right way to visualize it.

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 1 points 2 months ago (1 children)

No, unfortunately you are wrong.

Gpt4 is a better version of gpt3.

The brand new one that is allegedly "unhackable" just has a role hierarchy providing rules and that hasn't been fulled tested in the wild yet.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

First, did you read even the research papers?

Secondly, none are out that are actually immune to jailbreaking lol, Where did that claim come from?

Gpt4 is just an llm. Indeed the better version of gpt3

Gpt4o and 1o (claude-sonnet possibly also) rely on the generative capacities of the gpt4 model but there is allot more going under the hood that is not simply “generate the next token”

We all agree that a pure text predictor are not at all intelligent.

The discussion at hand is wether the current frontier of ai has moved the needle up. And i still would call it pretty dumb, but moving that needle, it did. Somewhere around (x2y0.5) if i have to use the meme. Stating its (0,0) just means people aren’t interested enough to pay attention, that these aren’t just llm anymore. That’s their right but i prefer people stopped joining the discussion so uninformed.