this post was submitted on 27 Dec 2024
57 points (86.1% liked)

[–] [email protected] 31 points 1 day ago* (last edited 1 day ago) (3 children)

The right tool for the right job. It's not intelligent, it is just trained. It all boils down to stochastics.

And then there is the ecological aspect...
Or sometimes the moral aspect, if it is used to decide someone's "fate" in application processing. And it may turn out racist or misogynist if you use the wrong training data.

[–] [email protected] 4 points 16 hours ago

Yeah. Considering the obscene resources ChatGPT and the others need, I don't think the niche use cases where they shine make it worth it.

[–] rottingleaf 2 points 1 day ago (1 children)

The moral aspect is resolved too, if you approach building human systems correctly.

There is a person or an organization making a decision. They may use an "AI", they may use Tarot cards, they may use the applicant's f*ckability from photos. But they are still responsible for that decision, and it is judged afterwards by technical, non-subjective criteria.

That's how these things are done properly. If a human system is not designed correctly, then it really doesn't matter which particular technology or social situation exposes that.

But I might have too high expectations of humanity.

[–] [email protected] 3 points 12 hours ago* (last edited 4 hours ago) (1 children)

Accountability of a human decision maker is the way to go. Agreed.

I see the danger when the decision maker's job demands high throughput, which forces fast decision making, and the tool (LLM) offers fast and easy decisions. What is the decision maker going to do if (s)he just sees cases instead of people and fates?

[–] rottingleaf 1 points 12 hours ago

If consequences for a mistake follow regardless, then it doesn't matter.

Or if you mean the person checking others: one can add a few levels of review. One can have checkers interested in different outcomes, as in criminal justice (... as it's supposed to work, anyway).