this post was submitted on 30 Oct 2023
193 points (96.6% liked)

Technology


The executive order comes after a series of non-binding agreements with AI companies.

The order has eight goals:

  1. Create new standards for AI safety and security.
  2. Protect privacy.
  3. Advance equity and civil rights.
  4. Stand up for consumers, patients, and students.
  5. Support workers.
  6. Promote innovation and competition.
  7. Advance US leadership in AI technologies.
  8. Ensure the responsible and effective government use of the technology.

[–] fubo 6 points 1 year ago* (last edited 1 year ago)

Unfortunately this doesn't seem to address the "takeoff" problem: the use of AI to build more-capable AI, the creation of autonomous AI systems that can develop self-protection drives (see Omohundro 2008), etc.

AI systems should not be allowed to control economic resources until alignment is solved. As it stands, if a major company were to turn over its management to an autonomous AI system, there's a good chance that would be game over for humans -- including the humans who made that decision.

The safety problems of autonomous AI systems able to (for instance) obtain their own resources or optimize their own code have been known since long before GPTs or deepfakes were a thing.

Unfortunately, "AI safety" has largely been co-opted to mean "stop humans from using deepfakes to bully or deceive other humans" rather than "stop fully-automated corporations from taking over the economy and running the planet with even less humane ethics than human-run corporations do."

(Think selfishness or greed are a problem today? Consider a megacorp run by an entity that literally has no other drives but to protect and expand itself, thinks billions of times faster than any human board of directors, and cannot die. Say what you like about Bill Gates, he at least seems to enjoy curing diseases.)