this post was submitted on 08 Oct 2024
225 points (93.1% liked)

Technology

[–] zlatiah 6 points 4 weeks ago

So it was the physics Nobel... I see why the Nature News coverage said the prize had been "scooped" by machine learning pioneers.

Since the news tried to be sensational about it... I tried to see what Hinton meant by fearing the consequences. I believe he is genuinely trying to prevent AI development from proceeding without proper regulation. This is a policy paper he was involved in (https://managing-ai-risks.com/), and it does mention some genuine concerns. Quoting them:

"AI systems threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance"

Like, bruh, people have already lost jobs because of ChatGPT, which can't even do math properly on its own...

There's also some irony in the preprint containing the quote "Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long," considering that a serious risk of AI development is its climate impact.