Aedis

joined 1 year ago
[–] Aedis 15 points 9 months ago (6 children)

This data needs to be normalized by speed or realistic range/day. Otherwise it's pretty meaningless.

[–] Aedis 2 points 9 months ago

I'm not sure that's correct though, because then there would be no difference between a sound and a fjord. Also, by that definition, would the Puget Sound not be a sound?

Fjord: Valley created by glacier flow that is then filled with water to create islands

Sound: Natural valley produced by underground movement that is then filled with water to create islands

[–] Aedis 1 points 9 months ago

Are you asking because this picture seems to say they're the same thing, or because you want to know? Haha

[–] Aedis 1 points 9 months ago

Me on basically any post that has incorrect information from blahaj.

[–] Aedis 31 points 9 months ago

Tell me you're a Java developer without telling me you're a Java developer.

[–] Aedis 33 points 9 months ago (1 children)

You either die a hero or live long enough to become the villain, something something.

[–] Aedis 5 points 9 months ago (3 children)

At least what I see with this experiment/article is that it's overly verbose; he takes a long time to get to the point. And then when he does, his methodology describes an experiment that cannot be verified. Even when something is "subjective" we can still draw conclusions from it if we set up proper non-subjective ways of evaluating the results we see (i.e. rubrics). The fact that he never explains in detail what leads him to call a result "terrible/v. bad/bad/good" is a massive red flag in his method.

After seeing that, I no longer read the rest of it. Any conclusions drawn from a flawed methodology are inherently fallacies or hearsay.

If this is explained further on in the article and that somehow refutes what I've postulated, then I'd have to say the article is poorly written.

All this to say... I agree with you, not worth the read.

[–] Aedis 2 points 10 months ago* (last edited 10 months ago) (3 children)

Source please?

[–] Aedis 3 points 10 months ago (1 children)

What? Ballmer hasn't had anything to do with MSFT since 2014, man.

[–] Aedis 4 points 10 months ago

Software engineer here, but not an LLM expert. I want to address one of the questions you had there.

Why doesn't it occasionally respond with a hundred thousand word response? Many of the texts it's trained on are longer than its usual responses.

An LLM like ChatGPT does some rudimentary level of pattern matching when it analyzes training data, which is why it won't generate a giant blurb of text unless you ask it to.

Let's say, for example, one of its training inputs is a transcription of a conversation. That will be tagged "conversation" by a person. Then it will see that tag when analyzing hundreds of input texts that are conversations. Finally, the training algorithm writes down that "conversations" have responses of 1-2 sentences with x% likelihood, because that's what the transcripts did. Now if another of the training sets is "best selling novels", it'll store that "best selling novels" have responses that are very long.
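
A toy sketch of that idea, purely for illustration (this is not how LLM training actually works under the hood; the samples and tags below are made up):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical, made-up training samples: (tag a person assigned, text)
training_samples = [
    ("conversation", "Hey, how's it going?"),
    ("conversation", "Pretty good, just got back from the gym."),
    ("best selling novel", "It was a dark and stormy night... " * 200),
]

# Tally how long texts with each tag tend to be
lengths_by_tag = defaultdict(list)
for tag, text in training_samples:
    lengths_by_tag[tag].append(len(text.split()))

# "Write down" that conversations are short and novels are long
typical_length = {tag: mean(lengths) for tag, lengths in lengths_by_tag.items()}
print(typical_length)
```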

ChatGPT will probably insert a couple of tokens before your question to help it figure out how it's supposed to respond: "respond to the user as if you are in a casual conversation".

This will make the model more likely to output small answers rather than giving you a giant wall of text. However, it is still possible for the model to respond with a giant wall of text if you ask for something that contradicts the original instructions (hence why jailbreaking models is possible).
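
As a rough sketch of what that prepending could look like (the instruction text and prompt format here are assumptions for illustration, not ChatGPT's actual internals):

```python
# Hidden instruction the service might prepend; this wording is invented
SYSTEM_INSTRUCTION = "Respond to the user as if you are in a casual conversation."

def build_prompt(user_question: str) -> str:
    # The model only ever sees the combined text, so the hidden
    # instruction nudges it toward short, chatty answers.
    return f"{SYSTEM_INSTRUCTION}\n\nUser: {user_question}\nAssistant:"

print(build_prompt("Why don't you ever answer with a hundred thousand words?"))
```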

[–] Aedis 2 points 10 months ago

😆❤️😢
