A place to discuss AI Safety, both reducing short-term harm and preventing misaligned AGI.
We need more users to share content here so that the AI Safety community can grow! If you have anything to share about AI alignment, please drop a post :) We appreciate you!