ChatGPT is losing some of its hype, as traffic falls for the third month in a row

August marked the third month in a row that the number of monthly visits to ChatGPT's website worldwide was down, per data from Similarweb.

[–] [email protected] 13 points 1 year ago (3 children)

I think (hope) that person is being facetious.

I hope people are smart enough to understand that the statistical sentence generators don't "know" anything.

[–] [email protected] 6 points 1 year ago

It can generate simple stuff accurately quite often. You just have to keep in mind that it can be dead wrong, so you have to test/verify what it says.

Sometimes I feel like a few lines of code should be doable in one line using a specific technique, so I ask it to do that and see what it comes up with. I don't just take what it says and use it; I look at how it tried to solve the problem and then check it, for example by looking up whether the method it used exists and reading that method's docs.

Exact same as what I would do if I saw someone on stack overflow or reddit recommending something.
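
As a rough illustration of that workflow (a hypothetical example of mine, not output from any actual model):

```python
# A few lines of code I might ask an LLM to condense into one line.
orders = [
    {"paid": True, "amount": 30},
    {"paid": False, "amount": 10},
    {"paid": True, "amount": 5},
]

total = 0
for order in orders:
    if order["paid"]:
        total += order["amount"]

# The kind of one-liner it might suggest: sum() over a generator
# expression. Before using it, I'd check the Python docs to confirm
# that sum() really accepts an iterable like this (it does).
one_liner = sum(o["amount"] for o in orders if o["paid"])

assert one_liner == total  # verify, don't just trust it
```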

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

It’s just very quick at doing simple things you could already do - or things that you’d need to think about for a couple of minutes.

I wouldn’t trust it to do things I couldn’t achieve. But for stuff I could, it’s often much quicker. And I’m well equipped to check what it’s doing myself.

“Statistical sentence generator” gets thrown around so much that, if anything, I doubt people actually understand what can be achieved through just that. It doesn’t matter if it doesn’t know anything. If it could generate sentences statistically with a 100% correct and proficient outcome, it’d always be correct regardless of its lack of knowledge.

We’re not at 100%. But we’re not at 10% either.
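
To make “statistical sentence generator” concrete, here’s a toy sketch of mine (the training sentence is made up): a bigram model that picks each next word purely from observed frequencies. Scale the same idea up by many orders of magnitude and you get something in the direction of an LLM.

```python
import random

# Train a bigram "sentence generator": record which words follow which.
corpus = "the model predicts the next word and the next word follows the last".split()
followers = {}
for prev, word in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(word)

def generate(start, length=8):
    # Generate purely from statistics; no knowledge involved.
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the next word follows the last"
```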

[–] [email protected] 0 points 1 year ago (1 children)

A parrot can generate sentences with a 100% correct and proficient outcome, but it's just using sounds its owner taught it.

Garbage in, garbage out.

Even the smartest, most educated people are never 100% sure of anything, because there are always nuances.

These engines are fed information that is written with 100% surety, completely devoid of nuance. They will not produce "answers to questions" that are correct, because "correct" is fluid.

[–] [email protected] 1 points 1 year ago (1 children)

Meh.

That’s a very fallibilistic viewpoint. There are lots of certainties that can be answered correctly.

[–] [email protected] 0 points 1 year ago (1 children)

There are entire fields of science that work on things that are "certainties".

If you're talking about simple stuff like "what is the first letter in the English alphabet", then sure. But many people, even in this thread, say they use these engines for hours to get answers, be guided, and have discussions.

It is a parrot on steroids, but even a parrot has knowledge. LLMs have 0% knowledge.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Well, we are back at my earlier point. There is no need for knowledge if the statistical models are good enough.

A weather forecast does not have any knowledge whatsoever. It has data and statistical models. No one goes around dismissing forecasts for not having any knowledge. Sure, we can be open to the fact that the statistical models are not perfect. But the models have gotten so good that they are used in people’s everyday lives with a rather high degree of certainty; they are used for hurricane warnings and whatnot, saving tens of thousands of lives - if not more - yearly.

Your map app has no knowledge either. But it’s still amazing at knowing, with a high degree of certainty, how much time you’ll need to get from place A to B and which route will be shortest, even taking live traffic into account. We could argue it’s just a parrot on steroids that has been fed billions of data points with some statistics on top, and say that it doesn’t know anything. But it’s such a useless point, because knowledge is not necessary if the data and statistical models are sound enough.
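
That route estimate, by the way, is nothing more than data plus a shortest-path algorithm. A minimal sketch (the travel times and places are invented, and real map apps are of course far more sophisticated):

```python
import heapq

# Edge weights are travel times in minutes between made-up places.
graph = {
    "A": {"B": 5, "C": 2},
    "B": {"D": 4},
    "C": {"B": 1, "D": 7},
    "D": {},
}

def fastest_time(start, goal):
    # Dijkstra's algorithm: pure graph search over the data,
    # no "knowledge" of what a road is.
    frontier = [(0, start)]  # (minutes so far, node)
    best = {start: 0}
    while frontier:
        minutes, node = heapq.heappop(frontier)
        if node == goal:
            return minutes
        for neighbor, cost in graph[node].items():
            new_minutes = minutes + cost
            if new_minutes < best.get(neighbor, float("inf")):
                best[neighbor] = new_minutes
                heapq.heappush(frontier, (new_minutes, neighbor))
    return None

print(fastest_time("A", "D"))  # 7, via A -> C -> B -> D
```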

[–] [email protected] 0 points 1 year ago (1 children)

It is exactly my point.

None of the "predictive" apps pretend to have knowledge, to give you answers, to "think", to "hallucinate", to "give you wrong answers".

Everybody knows the weather app gives "ballpark predictions", even though it's based on physical events that are measurable and extrapolatable.

Same with maps. People who follow maps 100% blindly end up in lakes. The predictions maps give are based on real-life measured data, valid for that particular window of time.

With LLMs, the input is language. The output is language. It wraps the generated text in pleasantries to imitate knowledge. Unless it's fed 100% correct material (no such thing), the output is 100% bullshit that sounds about right; right enough to lure naive and, maybe, less IT-literate people into feeling they're getting "correct" information.

Statistical engine. No knowledge. Garbage input, garbage output. No sign of "intelligence" whatsoever.

"asking" it questions is not carring about the "information" it returns.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

So you can feed a weather model weather data, but you can't feed a language model programming languages and get accurate predictions?

Basically no one is saying “yeah, I just go off the output, it’s perfect”. People use it to get a ballpark and then work off that, much like a meteorologist would.

It’s not 100% or 0%. With imperfect data, we get imperfect responses. But that’s no different from a weather model. We can still get results that are 50% or 80% accurate with less than 100% correct information, given that a large enough share of the data is correct.

[–] [email protected] 0 points 1 year ago (1 children)

Yeah, no difference between real-life physical measurements and calculations made with proven formulas, and random shit collected from random places on the internet (even, possibly, random "LLM"-generated sentences).

People do "just go off the output". There are people like that in this very thread.

Statements like "no difference" are just idiotic.

[–] [email protected] 1 points 1 year ago (1 children)

Of course there is. But weather forecasting has also gotten ridiculously more accurate over time. Better data, better models. We’ll get there with language models as well.

I’m not arguing that language models of today are amazingly accurate; I’m arguing they can be. That they are statistical models is not the problem. That they are new statistical models is.

[–] [email protected] 0 points 1 year ago

> I’m not arguing that language models of today are amazingly accurate; I’m arguing they can be. That they are statistical models is not the problem. That they are new statistical models is.

A broken clock is right twice a day.

I'm arguing that they will never be accurate, because accuracy is not possible. I mean, look at Wikipedia. At least it's written by people.

Full self driving next year, right?

[–] demlet 0 points 1 year ago

You may be right, now that I reread their comment.