[–] dual_sport_dork 226 points 5 months ago* (last edited 4 months ago) (40 children)

Say it with me again now:

For fact-based applications, the amount of work required to develop and subsequently babysit the LLM to ensure it is always producing accurate output is exactly the same as doing the work yourself in the first place.

Always, always, always. This is a mathematical law. It doesn't matter how much you whine or argue, or cite anecdotes about how you totally got ChatGPT or Copilot to generate you some working code that one time. The LLM does not actually have comprehension of its input or output. It doesn't have comprehension, period. It cannot know when it is wrong. It can't actually know anything.

Sure, very sophisticated LLMs might get it right some of the time, or even a lot of the time in the case of very specific topics with very good training data. But their accuracy cannot be guaranteed unless you fact-check 100% of their output.
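
Concretely, the only safe shape for that workflow is something like the sketch below (Python, with hypothetical generate_draft / human_fact_check / publish stand-ins, not anyone's real pipeline). The review gate is unconditional, and that gate is exactly where the promised savings evaporate.

```python
# Purely illustrative sketch: for fact-based output, every draft has to pass
# a human review gate before anything ships. All three functions are
# hypothetical stand-ins, not any particular product's API.

def generate_draft(prompt: str) -> str:
    # Stand-in for the LLM call; whatever comes back is unverified.
    return f"[unverified draft about: {prompt}]"

def human_fact_check(draft: str) -> bool:
    # Stand-in for a person reading and verifying every single claim.
    # This is the step that costs about as much as writing it yourself.
    return False

def publish(text: str) -> None:
    print("PUBLISHED:", text)

draft = generate_draft("local election results")
if human_fact_check(draft):
    publish(draft)
else:
    print("Held back: nothing goes out without a human signing off on it.")
```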

Underpaid employees were asked to feed published articles from other news services into generative AI tools and spit out paraphrased versions. The team was soon using AI to churn out thousands of articles a day, most of which were never fact-checked by a person. Eventually, per the NYT, the website's AI tools randomly started assigning employees' names to AI-generated articles they never touched.

Yep, that right there. I could have called that before they even started. The shit really hits the fan when the computer is inevitably capable of spouting bullshit far faster than humans are able to review and debunk its output, and that's only if anyone is actually watching and has their hand on the off switch. Of course, the end goal of these schemes is to be able to fire as much of the human staff as possible, so it ultimately winds up that there is nobody left to actually do the review. And whatever emaciated remains of management are left don't actually understand how the machine works nor how its output is generated.
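
Back-of-the-envelope, with made-up numbers (the report only says "thousands of articles a day"; the reviewer throughput is my assumption):

```python
# Illustrative arithmetic only -- both figures are assumptions, not numbers
# reported anywhere. The point is the ratio, not the exact values.
articles_generated_per_day = 3_000     # "thousands of articles a day"
articles_one_checker_can_verify = 20   # per fact-checker per day (assumed)

checkers_needed = articles_generated_per_day / articles_one_checker_can_verify
print(checkers_needed)  # 150.0 full-time fact-checkers just to keep pace
```

Nobody whose entire plan is shedding staff is going to hire 150 fact-checkers, so "most of which were never fact-checked by a person" was the only way it was ever going to go.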

Yeah, I see no flaws in this plan... Carry the fuck on, idiots.

[–] [email protected] 58 points 5 months ago (1 children)

Did you enjoy humans spouting bullshit faster than humans can debunk it? Well, brace for impact because here comes machine-generated bullshit! Wooooeee're fucked! 🥳

[–] dual_sport_dork 29 points 5 months ago (1 children)

To err is human. But to really fuck up, you need a computer.

[–] [email protected] 5 points 4 months ago* (last edited 4 months ago) (1 children)

A human can only do bad or dumb things so quickly.

A human writing code can do bad or dumb things at scale, as well as orders of magnitude more quickly.

[–] dual_sport_dork 2 points 4 months ago

And untangling that clusterfuck can be damn near impossible.

The reaper may not present his bill immediately, but he will always present his bill eventually. This is a zero-sum thing: there are no net savings, because the work required can be front loaded or back loaded, and you, sitting there at the terminal in the present, might not know which. Yet.

There are three phases where time and effort are input, and wherein asses can be bitten either preemptively or after the fact:

  1. Loading the algorithm with all the data. Where did all that data come from? In the case of LLMs, it came from an infinite number of monkeys typing on an infinite number of keyboards. That is, us. The system is front loaded with all of this time and effort -- stolen, in most cases. Also the time and effort spent by those developing the system and loading it with said data.
  2. At execution time. This is the classic example, i.e. the algorithm spits out into your face something that is patently absurd. We all point and laugh, and a screen shot gets posted to Lemmy. "Look, Google says you should put glue on your pizza!" Etc.
  3. Lurking horrors. You find out about the problem later. Much later. After the piece went to print, or the code went into production. "Time and effort were saved" producing the article or writing the code. Yes, they appeared to be saved -- then. Now it's now. Significant expenditure must be made cleaning up the mess. Nobody actually understood the code, but now it has to be debugged. And somebody has to pay the lawyers.
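
Put toy numbers on those three phases and the "savings" disappear (these figures are pure assumptions, just to show where the hours hide):

```python
# Toy numbers (assumptions, not measurements) showing that deferring the
# review doesn't delete the cost -- it moves it to phase 3, with interest.
hours_doing_it_yourself = 10

hours_prompting_and_drafting = 1    # phase 2: looks like a 90% saving...
hours_reviewing_every_claim = 9     # ...until you actually do the review
hours_cleanup_if_you_skip_it = 30   # phase 3: debugging, retractions, lawyers

honest_total = hours_prompting_and_drafting + hours_reviewing_every_claim
skipped_total = hours_prompting_and_drafting + hours_cleanup_if_you_skip_it

print(honest_total >= hours_doing_it_yourself)  # True -- no net savings
print(skipped_total)                            # 31 -- the reaper's bill
```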