this post was submitted on 28 Jul 2023
1403 points (99.0% liked)

[–] [email protected] 160 points 1 year ago (7 children)

14 out of 15 requests were of black people. Facial recognition is notoriously bad with darker skin tones.

Racial Discrimination in Face Recognition Technology https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/

[–] [email protected] 52 points 1 year ago (2 children)

Actually, all 15 were of black people. 14 were of black men, one was a black woman.

[–] [email protected] 17 points 1 year ago

Zero arrests as well.

[–] Blamemeta 7 points 1 year ago

New Orleans is pretty black, but that's just impressive.

[–] [email protected] 39 points 1 year ago

Yeah, this exact same story has kept coming up for years now, just with different names. Why anyone would think that the ineffectiveness and racial bias in these systems either wouldn't exist or would somehow go away eventually is beyond me. Just expensive, ineffective mass surveillance for the sake of it…

[–] [email protected] 19 points 1 year ago* (last edited 1 year ago) (1 children)

Who remembers the HP computer that was unable to identify black people? One of my favorite "oooph, that's not a good look" tech fails of all time. At least the people in that video were having a good laugh about it.

https://www.youtube.com/watch?v=t4DT3tQqgRM

Holy hell, that was 13 years ago.

[–] T156 8 points 1 year ago (1 children)

More recently, there was also Google Photos mislabeling a photo of a black couple as "gorillas" back in 2015.

https://www.bbc.com/news/technology-33347866

On a funnier note, there was also the AI tool turning a pixelated photo of Barack Obama into that of a white man.

https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias

[–] FlyingSquid 1 points 1 year ago

Haha. He looks like Mike Nelson.

[–] [email protected] 18 points 1 year ago (1 children)

Minor correction.
15 out of 15 requests were of black people. 14 of those requests were black men and 1 was a black woman.

[–] Blamemeta 6 points 1 year ago* (last edited 1 year ago)

Yeah. Basically anything with lower contrast against shadows and backgrounds. And because shadows are dark, they have lower contrast with other dark things.
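
To make that concrete, here's a minimal sketch (numpy only, hypothetical albedo and shading values) of why a darker subject gives a detector less signal to work with: the same lighting pattern spans fewer gray levels after 8-bit quantization, so facial features carry less contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
shading = rng.uniform(0.5, 1.0, size=(64, 64))  # identical lighting pattern for both cases

def rms_contrast(albedo: float) -> float:
    # Scene luminance = albedo * shading, quantized to 8 bits like a camera sensor.
    img = np.clip(albedo * shading * 255, 0, 255).astype(np.uint8)
    return img.std() / 255.0  # normalized RMS contrast

print(f"lighter subject: {rms_contrast(0.9):.3f}")  # roughly 3x the contrast...
print(f"darker subject:  {rms_contrast(0.3):.3f}")  # ...of the same scene at low albedo
```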

[–] [email protected] -3 points 1 year ago (5 children)

Discrimination is the wrong word. Technology has no morals or sense of justice. It is bias in the data that developers should have accounted for.

[–] [email protected] 11 points 1 year ago

It's totally accurate though. It's practically the definition of systemic racism. Think about housing or financial policies that disproportionately fail minorities. They aren't some Klan manifesto; they just include banal qualifications and exemptions that end up producing the same result.

[–] slumberlust 8 points 1 year ago (1 children)

This seems shortsighted. You are basically asking people to police their own biases. That's a tall ask for something no one can claim immunity from.

[–] [email protected] 1 points 1 year ago (1 children)

I am asking a group of scientists, who should be very well-versed in statistics and weights (one of the biggest components of a machine learning model), to account for how biased their data is when engineering their model.

It's really not a hard ask.
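
For what it's worth, here's a minimal sketch (scikit-learn, stand-in random data) of one common way to do exactly that: weight each sample inversely to its group's frequency, so the over-represented group doesn't dominate training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))                      # stand-in features
y = rng.integers(0, 2, size=1000)                    # stand-in labels
group = rng.choice([0, 1], size=1000, p=[0.9, 0.1])  # 90/10 demographic skew

# "balanced" assigns each group the weight n_samples / (n_groups * n_in_group),
# so the 10% group counts as much in the loss as the 90% group.
weights = compute_sample_weight("balanced", group)

clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
```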

[–] Cortell 1 points 1 year ago

So in other words, technology is just as biased as the people who designed it.

[–] Cortell 7 points 1 year ago

Ask the people who create the data sets that machine learning models train on how they feel about racism, and get back to us.

[–] Smokeless7048 6 points 1 year ago (1 children)

It can be an imported bias/discrimination. I still think that word's fair.

Do you have a more accurate word?

[–] [email protected] 2 points 1 year ago

I already said it: bias. It's a common problem with LLMs and other machine learning models that model engineers need to watch out for.

[–] HardlightCereal -2 points 1 year ago (1 children)

You need to learn some critical race theory. Racist systems turn innocent intentions into racist actions. If a PhD student trains an AI model on only white people because the university only has white students, then that AI model is going to fail black people because black people were already failed by university admissions. Innocent intention plus racist system equals racist action.
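
The mechanism is easy to demonstrate. Here's a minimal sketch (numpy + scikit-learn, synthetic data, with a hypothetical feature shift standing in for different image statistics): a model fit only on one group degrades badly on a group whose features are distributed differently, with no malicious intent anywhere in the pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n: int, shift: float):
    # Same underlying ground truth for both groups; only the feature
    # distribution shifts (standing in for, say, different skin tones
    # changing pixel statistics).
    signal = rng.normal(size=(n, 8))
    y = (signal[:, 0] > 0).astype(int)
    return signal + shift, y

X_a, y_a = make_group(2000, shift=0.0)  # the only group in the training set
X_b, y_b = make_group(2000, shift=2.0)  # the group the model never saw

clf = LogisticRegression().fit(X_a, y_a)
print("group A accuracy:", clf.score(X_a, y_a))  # near 1.0
print("group B accuracy:", clf.score(X_b, y_b))  # far worse
```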

[–] [email protected] 1 points 1 year ago

Even CRT would call this "racial bias", which is exactly what this is.