this post was submitted on 01 Aug 2023
525 points (82.1% liked)

Technology


An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

[–] sirswizzlestix 39 points 1 year ago (23 children)

These biases have always existed in the training data used for ML models (society shapes the data we collect, and its latent biases come along with it), but it's definitely interesting that generative models now make those biases much more visible (figuratively, and with image models literally) to the layperson.

[–] SinningStromgald 5 points 1 year ago (22 children)

But they know the AIs have these biases, at least now. Shouldn't they be able to code them out, or at least lessen them? Or would that just create more problems?

Sorry, I'm no programmer, so I have no idea if that's even possible or not. It just sounds possible in my head.

[–] dojan 24 points 1 year ago* (last edited 1 year ago) (11 children)

You don't really program them; they learn from the data they're given. Say you want a model that generates faces, and you train it on 500 faces, 470 of which are of black women. When you ask it to generate a face, it'll most likely generate a face of a black woman.

The models are essentially maps of probability: you give one a prompt and ask it what the most likely output is, given that prompt.

If she had used a model trained to generate pornography, it would've likely given her something more pornographic, if not outright explicit.
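
To make that concrete, here's a toy sketch (my own illustration, nothing from the article) of why a skewed training set produces skewed outputs; the 470-out-of-500 split is just the made-up number from above:

import random
from collections import Counter

# Toy "training set": 470 of 500 faces are of black women (the made-up split above).
training_labels = ["black woman"] * 470 + ["someone else"] * 30

# A generative model is, very loosely, a sampler over the patterns it saw in training,
# so sampling 1,000 "faces" mostly reproduces the majority of the data.
samples = random.choices(training_labels, k=1000)
print(Counter(samples))  # roughly 94% "black woman", mirroring the skew in the data

Real models are vastly more complicated than a lookup like this, but the punchline is the same: the output distribution follows the training distribution.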


You've also kind of touched on a problem with large language models: they're not programmed, but rather prompted.

When it comes to Bing Chat, ChatGPT and others, they have additional AI agents sitting alongside them to help filter and flag problematic content, both content provided by the user and content the LLM itself generates. With one prompt I tried, for example, the model marked my content as problematic and the bot gave me a canned response: "Hi, I'm bing. Sorry, can't help you with this. Have a nice day. :)"

These filters are very crude, but they're necessary because of problems inherent in the source data the model was trained on. If you crawl the internet for training data, you're bound to run into all sorts of good information: Wikipedia articles, Q&A forums, recipe blogs, personal blogs, fanfiction sites, etc. Enough of this data will give you a well-rounded model capable of generating believable content across a wide range of topics. However, you can't feasibly filter the entire internet, so among all of this you'll also find hate speech, blogs run by neo-nazis and conspiracy theorists, blogs where people talk about their depression, suicide notes, misogyny, racism, and all sorts of depressing, disgusting, evil, and dark aspects of humanity.

Thus there's no code you can change to fix racism.

if (bot.response == racist) 
{
    dont();
}

Instead there are simpler measures that read the user/agent interaction, filter it for possible bad words, or (more likely) use another AI model to gauge the probability of the interaction being negative:

if (interaction.weightedResult < negative)
{
    return "I'm sorry, but I can't help you with this at the moment. I'm still learning though. Try asking me something else instead! 😊";
}
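
In slightly less pseudo-code, a guardrail along those lines might look something like the sketch below. This is just my own toy illustration, not how Bing or OpenAI actually implement it, and the word list is a placeholder for the separate classifier model a real system would use:

CANNED_REPLY = ("I'm sorry, but I can't help you with this at the moment. "
                "I'm still learning though. Try asking me something else instead! 😊")

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms, not a real filter list

def interaction_score(user_prompt: str, draft_reply: str) -> float:
    """Toy stand-in for the 'weightedResult' above: 1.0 = fine, 0.0 = clearly bad."""
    text = f"{user_prompt} {draft_reply}".lower()
    hits = sum(word in text for word in BLOCKLIST)
    return max(0.0, 1.0 - 0.5 * hits)

def respond(user_prompt: str, draft_reply: str, threshold: float = 0.75) -> str:
    # If the exchange scores below the threshold, swap in the canned reply.
    if interaction_score(user_prompt, draft_reply) < threshold:
        return CANNED_REPLY
    return draft_reply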

As an aside, if she'd prompted "professional Asian woman" it likely would've done a better job. Depending on how much "creative license" she gives the model, though, it still won't give her her own face back. I get the idea of what she's trying to do, and there are certainly ways of achieving it, but she likely wasn't using a product/model weighted to do specifically the thing she was asking for.
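
One of those ways, as a rough sketch only (I have no idea what Playground AI runs under the hood), is an img2img pass with a low denoising strength, which keeps the original face largely intact while restyling the photo. With the diffusers library it'd look roughly like this:

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Start from her actual photo instead of generating a face from scratch.
init_image = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Low strength = little "creative license": the model mostly keeps the input image.
result = pipe(
    prompt="professional LinkedIn headshot, business attire, studio lighting",
    image=init_image,
    strength=0.3,
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]
result.save("headshot.png")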


Edit

Just as a test, because I got curious myself, I had Stable Diffusion generate 20 images given the prompt

professional person dressed in business attire, smiling

20 sampling steps, using DPM++ 2M SDE Karras, and the v1-5-pruned-emaonly Stable Diffusion model.

Here's the result
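
For anyone who wants to repeat the experiment, the roughly equivalent setup in the diffusers library (assuming you'd rather script it than use a web UI) would be:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# The diffusers-format SD 1.5 weights correspond to the v1-5-pruned-emaonly checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# DPM++ 2M SDE Karras, as used for the batch above.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", use_karras_sigmas=True
)

prompt = "professional person dressed in business attire, smiling"
# Generate in smaller batches if 20 images at once doesn't fit in VRAM.
images = pipe(prompt, num_inference_steps=20, num_images_per_prompt=20).images
for i, img in enumerate(images):
    img.save(f"corporate_{i:02}.png")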

I changed the prompt to

professional person dressed in business attire, smiling, [diverse, diversity]

And here is the result

The models can generate people who aren't white men, but in a way it's just a reflection of our society: white men are the default. Likewise, if you prompt it for "loving couple", the overwhelming majority of images will be of straight couples. But don't just take my word for it, here's an example.

[–] Blademax 4 points 1 year ago (2 children)

The Hands/digits...the horror....

[–] dojan 3 points 1 year ago (1 children)

It can do faces quite well on second passes but struggles hard with hands.

Corporate photography tends to be uncanny and creepy to begin with, so using an AI to generate it made it even more so.

I totally didn’t just spend 30 minutes generating corporate stock photos and laughing at the creepy results. 😅

[–] Blademax 1 points 1 year ago

Just glad there's a tell for AI photos. Hope it never figures it out.

[–] MaxVerstappen 3 points 1 year ago

It's clearly biased against fingered folks!
