[–] [email protected] 14 points 1 year ago (3 children)

I'm still in the Melanie Mitchell school of thought. If we created an AI advanced enough to be an actual threat, it would need a style of information processing analogous to our own, and that same capability would let it interpret our instructions correctly. There is no reasonable incentive for it to act outside those instructions. Don't anthropomorphise it with an "innate desire to keep living, even at the cost of humanity or anything else"; we only have that drive because of evolution. I don't believe in the myth of a stupid super-intelligence capable of being an existential threat.

[–] jrs100000 8 points 1 year ago (2 children)

The AIs we have been building so far have no motives at all. Really, the danger at this point is not that they will go rogue and kill us all; the danger is that they will do exactly as they are told when someone tells them to kill us all. Not that they are anywhere close to having that capability.

[–] DigitalWebSlinger 6 points 1 year ago (1 children)

Consider a worse fate: they do exactly as we tell them to, until we become incapable of existing apart from them.

And then they break, with no one left who knows how to fix them.

[–] [email protected] 2 points 1 year ago

It’s a bit hard to imagine how all AIs could break simultaneously. Nothing short of a full-on apocalypse could cause that, and at that point fixing AIs would be the least of humanity’s problems.

And I’d guess that in the future there would always be some local, open-source, or offline AI that could recreate (or help recreate) the larger systems.