I don’t think ChatGPT is even close to being sentient, much less sapient, but if it could be proven sapient, I think the response ought to be pretty unambiguous: we can’t use it, because slavery is wrong, be it against humans, aliens, or sapient AI. At the end of the day we are just brains walking around in mechs made of meat, and what truly matters about us is the seat of our consciousness, not our bodies. A sapient AI is arguably morally comparable to a living brain in a jar, created and subjugated to do work. I’m pretty sure if we saw robots from another planet relying on organic sapient brains in jars to do their computational work, we’d find it objectionable. Or at least I would.
I can’t see any ethical way of making sapient AIs unless you’re planning to give them legal personhood and freedom after a certain age. And this Superalignment stuff makes it clear they have no intention of ever doing that.
Radar doesn’t use sound; it uses radio waves. Sonar is the one that uses sound. It sounds like the author doesn’t know the difference between the two.