this post was submitted on 22 Feb 2024
488 points (96.2% liked)

Technology

60018 readers
3033 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
 

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[–] xantoxis 130 points 10 months ago (88 children)

I don't know how you'd solve the problem of making a generative AI accurately create a slate of images that both a) inclusively produces people with diverse characteristics and b) understands the context of what characteristics could feasibly be generated.

But that's because the AI doesn't know how to solve the problem.

Because the AI doesn't know anything.

Real intelligence simply doesn't work like this, and every time you point it out someone shouts "but it'll get better". It still won't understand anything unless you teach it exactly what the solution to a prompt is. It won't, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.

[–] TORFdot0 30 points 10 months ago* (last edited 10 months ago) (6 children)

Edit: further discussion on the topic has changed my viewpoint on this. It's not that it's been trained wrong on purpose and is now confused; it's that everything it's asked is secretly being changed. It's like a child being told by their teacher to make up a story, when the principal actually asked for the right answer.

Original comment below


They’ve purposely overridden its training to make it generate more PoC. It’s a noble goal to have more inclusivity, but we purposely trained it wrong and now it’s confused. It's the same as if you lied to a child during their education and then asked them for real answers: they’ll tell you the lies they were taught instead.

[–] TwilightVulpine 17 points 10 months ago (2 children)

This result is clearly wrong, but it's a little more complicated than saying that adding inclusivity is purposely training it wrong.

Say, if "entrepreneur" only generated images of white men, and "nurse" only generated images of white women, then that wouldn't be right either; it would just be reproducing and magnifying human biases. Yet this is the sort of thing AI does a lot, because AI is a pattern recognition tool inherently inclined to collapse data into an average, and data sets seldom have equal or proportional samples for every single thing. Human biases affect how many images we have of each group of people.
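To make the "collapse into an average" point concrete, here's a toy sketch (the data set and generator are entirely made up, not how any real image model works): a naive generator trained on a skewed data set just reproduces the majority pattern every time.

```python
from collections import Counter

# Hypothetical toy training set: 90% of "nurse" images happen to be
# labeled (white, woman) because that's what the scraped data contained.
training_labels = (
    [("white", "woman")] * 90
    + [("black", "man")] * 5
    + [("asian", "woman")] * 5
)

def naive_generator(labels):
    """Reproduce the single most common pattern seen in training."""
    most_common, _count = Counter(labels).most_common(1)[0]
    return most_common

# The minority samples exist in the data, but the "average" wins every time.
print(naive_generator(training_labels))  # ('white', 'woman')
```

Real diffusion models are far more complex than a mode lookup, but the failure direction is the same: skewed inputs pull the outputs toward the majority of the data.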

It's not even limited to image generation AIs. Black people often bring up how facial recognition technology is much spottier for them, because the training data and even the camera technology were tuned and tested mainly on white people. Usually that's not even done deliberately; it happens because of who gets to work on it and where it gets tested.

Of course, secretly adding "diverse" to every prompt is also a poor solution. The real solution here is providing more contextual data. Unfortunately, the AI is clearly not able to determine these things by itself.
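The kind of hidden prompt rewriting being described might look something like this sketch. To be clear, this is purely hypothetical; Google hasn't published its actual implementation, and the term list and function here are invented for illustration.

```python
# Hypothetical demographic keywords that would suppress the rewrite.
DEMOGRAPHIC_TERMS = {"white", "black", "asian", "hispanic", "diverse"}

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity instruction unless the user was already explicit."""
    words = set(user_prompt.lower().split())
    if words & DEMOGRAPHIC_TERMS:
        return user_prompt  # user specified demographics; leave it alone
    return user_prompt + ", diverse group of people"

# The rewrite has no idea that historical context makes this one wrong:
print(rewrite_prompt("a 1943 German soldier"))
```

The problem is visible right in the sketch: the rule fires on *every* unmarked prompt, with no model of which subjects have a historically constrained appearance.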

[–] TORFdot0 5 points 10 months ago (1 children)

I agree with your comment. As you say, I doubt the training sets are reflective of reality either. I guess that leaves tampering with the prompts, gaslighting the AI into providing results it wasn't asked for, as the method we've chosen to fight this bias.

We expect the AI to give us text or image generation that is based in reality, but the AI can't experience reality and only has the knowledge of the training data we provide it, which is just an approximation of reality, not the reality we exist in. I think maybe the answer would be teaching users of the tool that the AI is doing the best it can with the data it has. It isn't racist; it's just ignorant. Let the user add "diverse" to the prompt if they wish, rather than tampering with the request to hide the insufficiencies in the training data.

[–] TwilightVulpine 5 points 10 months ago

I wouldn't count on users realizing the limitations of the technology, or on the companies openly admitting to it at the expense of their marketing. As far as art AI goes this is just awkward, but it worries me about LLMs: people use them expecting accurate, applicable information, only to come away with very skewed worldviews.

[–] cheese_greater 1 points 10 months ago* (last edited 10 months ago) (2 children)

Why couldn't it be tuned to simply randomize the skin tone where not otherwise specified? Like, if it's all completely arbitrary anyway, just randomize stuff; problem solved?

[–] TwilightVulpine 6 points 10 months ago

Well, we are seeing what happens when they randomize it. It doesn't always work.

[–] kava 4 points 10 months ago

Then you have black Nazis and Native American Texas Rangers. It doesn't work.
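A toy sketch shows why context-free randomization produces exactly those results (the attribute list and function are hypothetical, for illustration only): the sampler picks uniformly, with no model of which combinations are historically plausible.

```python
import random

# Hypothetical attribute pool; a real system would be far larger.
ETHNICITIES = ["white", "black", "east asian", "native american"]

def randomize(subject: str, rng: random.Random) -> str:
    """Prepend a uniformly random ethnicity, ignoring the subject entirely."""
    return f"{rng.choice(ETHNICITIES)} {subject}"

rng = random.Random(0)  # seeded for reproducibility
for _ in range(3):
    print(randomize("1930s German soldier", rng))
```

Because the draw is independent of the subject, historically implausible combinations come out just as often as plausible ones; fixing that would require exactly the contextual knowledge the model lacks.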
