this post was submitted on 21 Jan 2024
35 points (81.8% liked)
Technology
We should probably tease apart AGI and what I prefer to call “large-scale computing” (LLMs, SD, or any other statistical ML approach).
AGI has a pretty good chance of killing us all, or creating massive problems pretty much on its own. See: instrumental convergence.
Large-scale computing has the potential to cause all sorts of problems too. Just not the same kinds of problems as AGI.
I don’t think he sees LSC as an x-risk. Except maybe in the sense that a malicious actor who wants to provoke nuclear war could do so a bit more efficiently by using LSC, but it’s not like an LSC service is pulling a “War Games”.
What he’s proposing is:
And why not? LSC already poses big epistemic/economic/political/cultural problems on its own, even if nobody had any ambitions toward AGI.
Or C) Actually understand that alignment is a very hard problem that we probably won't be able to solve in time.