this post was submitted on 07 Aug 2023
534 points (98.0% liked)

Junk websites filled with AI-generated text are pulling in money from programmatic ads::More than 140 brands are advertising on low-quality content farm sites — and the problem is growing fast.

[–] Cabrio 11 points 1 year ago* (last edited 1 year ago) (1 children)

ChatGPT is a predictive text engine that can already generate more coherent and accurate information than 90% of Internet contributors, and it doesn't even have the capacity to add 5+5 together. I welcome our new overlords.

[–] Aceticon 5 points 1 year ago* (last edited 1 year ago) (1 children)

LLMs like ChatGPT are accurate only by chance, which is why you can't really trust the info in what they output: if your question lands in or near a cluster in the "language token N-space" where a good answer is, you'll get a good answer; otherwise you'll get whatever is closest in that token space, which might very well be complete bollocks whilst delivered in the language of absolute certainty.
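
To make the "predictive text" point concrete, here's a minimal toy sketch in Python. The vocabulary, prompt, and scores are entirely made up for illustration (nothing like ChatGPT's real weights or tokenizer); the only thing it shows is that the generation step turns scores into probabilities and picks a likely continuation, with no step anywhere that checks whether the answer is true.

```python
import math
import random

# Toy next-token predictor. Vocabulary and scores are invented purely for
# illustration -- a real LLM has tens of thousands of tokens and learns its
# scores from training data.
vocab = ["4", "10", "25", "fish"]
scores_for_prompt = {"5+5=": [1.2, 3.9, 0.7, 0.1]}  # higher score = seen together more often

def next_token(prompt):
    scores = scores_for_prompt[prompt]
    exps = [math.exp(s) for s in scores]          # softmax: scores -> probabilities
    probs = [e / sum(exps) for e in exps]
    # Sample a token in proportion to its probability. Nothing here verifies
    # arithmetic; "10" wins only because it is the most common continuation.
    return random.choices(vocab, weights=probs, k=1)[0]

print(next_token("5+5="))  # usually "10", occasionally something else entirely
```

The point of the sketch: a plausible answer and a wrong one come out of the exact same mechanism, which is why the fluency of the output tells you nothing about its accuracy.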

It is, however, likely more coherent than "90% of Internet contributors" for one-off generated text (not if you get to do question and answer, though: just ask it something, and when you get a correct answer tell it "that's not correct" and see how it goes).

This is actually part of the problem: with the stuff output by LLMs, you can't really intuit the likely accuracy of a response from the grammatical coherence and word choice of the response itself. It's like being faced with the greatest politician in the world who is also an idiot savant: perfect at memorizing what he or she heard and crafting great speeches from it, whilst being a complete moron at everything else, including understanding the meaning of what was heard before reshuffling it and repeating it to others.

[–] Cabrio 1 points 1 year ago* (last edited 1 year ago)

Oh I know. 54% of American adults read below a 6th grade comprehension level.

Most of the answers you get out of them are accurate only by chance, which is why you can't really trust the info in what they output: if your question ended up in or near a cluster in the "educated N-space" where a good answer is, then you get a good answer; otherwise you'll get whatever is closest in the "response N-space", which is practically guaranteed to be complete bollocks whilst delivered in the language of absolute certainty.

I'd go on but I'm sure the point is made.