Great headline, but ask fusion how long they have been 20 years away and how many more years they have...

[–] [email protected] 5 points 4 days ago (3 children)

AI is doing nothing. It's not sentient, and it's not making conscious decisions.

People are doing it.

[–] aesthelete 2 points 4 days ago* (last edited 4 days ago) (2 children)

An "AI" operated machine gun turret doesn't have to be sentient in order to kill people.

I agree that people are the ones allowing these things to happen, but software doesn't have to have agency to appear that way to laypeople, and when people are placed in a "managerial" or "overseer" role, they behave as if the software knows more than they do, even when they're subject matter experts.

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago) (1 children)

Would it be any different, as far as that ethical issue goes, if the AI-operated machine gun or the corporate software were driven by traditional algorithms instead of an LLM?

Because a machine gun does not need "modern" AI to take aim and shoot at people, I guarantee you that.
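
To make that concrete: classical computer vision from decades before the current AI wave can already track and aim at moving targets. A rough, purely illustrative sketch using standard OpenCV frame differencing; the `aim_at` servo hook is a made-up placeholder, not a real API:

```python
# Illustrative sketch only: motion tracking with classical computer vision
# (frame differencing + contours), no neural networks involved.
import cv2

def aim_at(point):
    # Hypothetical stand-in for turret servo control; here we just log it.
    print("aiming at", point)

def largest_motion_center(prev_gray, gray, min_area=500):
    """Return the center (x, y) of the largest moving region, or None."""
    diff = cv2.absdiff(prev_gray, gray)                  # pixel-wise change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)          # fill small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not moving:
        return None
    x, y, w, h = cv2.boundingRect(max(moving, key=cv2.contourArea))
    return (x + w // 2, y + h // 2)

cap = cv2.VideoCapture(0)                                # any camera feed
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    target = largest_motion_center(prev_gray, gray)
    if target is not None:
        aim_at(target)
    prev_gray = gray
```

That's it: threshold the difference between two frames, find the biggest blob, point at its center. No "AI" required.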

[–] aesthelete 3 points 4 days ago* (last edited 4 days ago)

No, it wouldn't be different. Though it'd definitely be better to have a discernible algorithm / explicable set of rules for things like health care. The companies deploying these systems shrugging their shoulders and saying they don't understand the "AI" should be completely unacceptable.
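
For comparison, here's a toy sketch of what a discernible, explicable rule set for a coverage decision could look like, where every denial cites the exact rule that triggered it. The rules and field names are made up for illustration:

```python
# Illustrative only: a transparent, auditable decision function where
# "we don't understand the AI" is never an available answer.
from dataclasses import dataclass

@dataclass
class Claim:
    procedure_covered: bool       # hypothetical fields, for illustration
    prior_authorization: bool
    annual_cap_remaining: float
    amount: float

def decide(claim: Claim) -> tuple[bool, str]:
    """Return (approved, reason); the reason cites a specific rule."""
    if not claim.procedure_covered:
        return False, "Rule 1: procedure not on the covered list"
    if not claim.prior_authorization:
        return False, "Rule 2: prior authorization missing"
    if claim.amount > claim.annual_cap_remaining:
        return False, "Rule 3: claim exceeds remaining annual cap"
    return True, "All rules passed"

approved, reason = decide(Claim(True, True, 1200.0, 80.0))
print(approved, "-", reason)   # True - All rules passed
```

Every decision traces to a rule a regulator or patient can read and dispute, which is exactly what an opaque model can't offer.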

I wasn't saying AI = LLM either. Whatever drives Teslas is almost certainly not an LLM.

My point is that half-baked software is already killing people daily, but because it's more dramatic to pontificate about the coming of Skynet, the "AI" people waste time on sci-fi nonsense scenarios instead of drawing any attention to that.

Fighting the ills bad software is already causing today would also do a lot to advance the cause of preventing bad software from reaching the imagined apocalyptic point in the future.