this post was submitted on 22 Sep 2024
10 points (57.8% liked)

Futurology


"Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently," the IDAIS statement continues. "Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity."

[–] poplargrove 28 points 3 months ago (1 children)

I really like what Yann LeCun had to say about this:

"It seems to me that before 'urgently figuring out how to control AI systems much smarter than us' we need to have the beginning of a hint of a design for a system smarter than a house cat." LeCun continued: "It's as if someone had said in 1925 'we urgently need to figure out how to control aircrafts that can transport hundreds of passengers at near the speed of the sound over the oceans.' It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety.  It didn't require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements." source

Meanwhile, there are plenty of issues we are already facing that we should be focusing on instead:

ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities. source

[–] [email protected] 7 points 3 months ago (1 children)

That feels like it ignores the current use of autonomous weapons and military AI.

[–] [email protected] 4 points 3 months ago

Yes, because that is actually entirely irrelevant to the existential threat AI poses. An AI with a gun is far less scary than an AI with access to the internet.