Pentagon AI more ethical than adversaries’ because of ‘Judeo-Christian society,’ USAF general says
(www.defenseone.com)
tl;dr: The headline is false; the general did not actually say that. It sounded wrong to me, so I watched the video linked in the article to check. Sure enough, it was wrong. However, the reality may not be any more reassuring.
Hypothesis: Like, no, that's obviously wrong; either the headline is trash or the general made a whole tossed salad with mango sauce out of whatever the people working on it said. (stated before further investigation; stay tuned)
Updating: https://youtu.be/wn1yEovtYRM?t=3459
Okay, wow.
So the speaker is saying this at the end of the panel, in response to a question asking about the use of autonomous weapons.
They want to talk about who's trusted to make the decision of whether to employ lethal force in a combat situation: a human American soldier, who might be exhausted and not thinking clearly, or an algorithm that doesn't get tired.
And one thing they mention is that an enemy might not have ethics that would lead them to even care about that distinction. And they express that as "Judeo-Christian morality".
That doesn't sit right with me. It sounds to me, in that moment, like they're implying that people from other cultures could be less moral, and that we should be willing to be more free with our weapons towards such people. That sounds to me like the sort of bullshit that came out of the Vietnam War.
But the rest of the answer sounds like they're trying to point at the problem of making command decisions in scenarios where the opponent might deploy autonomous weapons first. If the enemy has already handed decision-making over to an algorithm, how does that affect what we should do?
And they're maybe expressing that to their expected audience — mind you, the Air Force is heavily infiltrated by far-right Christian radicals — in a way that they hope makes sense.
Conclusion: The headline is incorrect; the general did not actually say that a Pentagon AI would be more ethical for any reason; they were talking about the human ethical decision of whether to trust AI to make decisions. But what they did say is complicated and scary for different reasons, including the internal culture of the US Air Force.
Folks can go watch it and see. No need to be a butt about it.
Don't mind that turd. You took the time to do a thoughtful breakdown. It is a subtle nuance whether "Pentagon AI would be more ethical" or "AI managed by Pentagon staff would be used more ethically" and you were right to point it out. The headline could be accused of oversimplifying or clickbaiting but I don't think it was intentionally falsifying claims. The real story, as you pointed out, is the sense of righteousness and declaring a moral high ground based on any religion.
I think the general's point stands, though.
No matter what ethical system you might be using (to decide whether to turn over control of a combat situation to AI), the enemy might be using a different one, and come to different conclusions; and that in turn affects what conclusions you should come to.
This is actually a decision theory issue; and that's something that military strategists do study.
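That point can be sketched as a toy two-player game. The strategies and payoffs below are entirely made up for illustration (nothing in the talk or the article gives numbers); the only thing the sketch shows is that your best choice depends on what you expect the adversary to choose.

```python
# Hypothetical payoffs, purely illustrative.
# Strategies: "human" = keep a human in the loop, "auto" = delegate to AI.
# Assumption baked in: mutual deliberation is best, but being the slow,
# deliberate side against an automated enemy is the worst outcome for you.
payoffs = {
    ("human", "human"): 3,  # both sides deliberate
    ("human", "auto"):  0,  # we deliberate, they react instantly
    ("auto",  "human"): 2,  # we automate against a deliberate enemy
    ("auto",  "auto"):  1,  # automated exchange: fast but error-prone
}

def best_response(their_choice):
    """Our payoff-maximizing strategy, given the enemy's choice."""
    return max(("human", "auto"),
               key=lambda ours: payoffs[(ours, their_choice)])

print(best_response("human"))  # -> human
print(best_response("auto"))   # -> auto
```

With these (made-up) numbers, the best response flips depending on what the enemy does, which is exactly the strategic bind the comment describes: the ethically preferred option stops being the rational one once you expect the other side to automate first.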