this post was submitted on 23 Jul 2023
360 points (95.7% liked)

Technology


Pentagon AI more ethical than adversaries’ because of ‘Judeo-Christian society,’ USAF general says::The path to ethical AI is a “very important discussion” being held at DOD’s “very highest levels,” says service’s programs chief.

fubo · 65 points · 1 year ago (edited)

tl;dr: The headline is false; the general did not actually say that. I thought it sounded wrong, so I watched the video that the article linked to, to check. Sure enough, it was wrong. However, the reality may not be any more reassuring.


Hypothesis: Like, no, that's obviously wrong; either the headline is trash or the general made a whole tossed salad with mango sauce out of whatever the people working on it said. (stated before further investigation; stay tuned)


Updating: https://youtu.be/wn1yEovtYRM?t=3459


Okay, wow.

So the speaker is saying this at the end of the panel, in response to a question asking about the use of autonomous weapons.

They want to talk about who's trusted to make the decision of whether to employ lethal force in a combat situation: a human American soldier, who might be exhausted and not thinking clearly, or an algorithm that doesn't get tired.

And one thing they mention is that an enemy might not have ethics that would lead them to even care about that distinction. And they express that as "Judeo-Christian morality".

That doesn't sit right with me. It sounds to me, in that moment, like they're implying that people from other cultures could be less moral, and that we should be willing to be more free with our weapons towards such people. That sounds to me like the sort of bullshit that came out of the Vietnam War.

But the rest of the answer sounds like they're trying to point at the problem of making command decisions in scenarios where the opponent might deploy autonomous weapons first. If the enemy has already handed decision-making over to an algorithm, how does that affect what we should do?

And they're maybe expressing that to their expected audience — mind you, the Air Force is heavily infiltrated by far-right Christian radicals — in a way that they hope makes sense.


Conclusion: The headline is incorrect; the general did not actually say that a Pentagon AI would be more ethical for any reason. They were talking about the human ethical decision of whether to trust AI to make decisions. But what they did say is complicated and scary for different reasons, including the internal culture of the US Air Force.
