this post was submitted on 22 Feb 2024
488 points (96.2% liked)

Technology


Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[–] xantoxis 130 points 4 months ago (58 children)

I don't know how you'd solve the problem of making a generative AI accurately create a slate of images that both a) inclusively produces people with diverse characteristics and b) understands the context of what characteristics could feasibly be generated.

But that's because the AI doesn't know how to solve the problem.

Because the AI doesn't know anything.

Real intelligence simply doesn't work like this, and every time you point it out someone shouts "but it'll get better". It still won't understand anything unless you teach it exactly what the solution to a prompt is. It won't, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.

[–] random9 46 points 4 months ago (1 children)

You don't do what Google seems to have done - inject diversity artificially into prompts.

You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for "american woman" you definitely could find plenty of pictures of American women from all sorts of racial backgrounds, and use that to train the AI. For "german 1943 soldier" the accurate historical images are obviously far less likely to contain racially diverse people in them.

If Google has indeed already done that, and then still had to artificially force racial diversity, then their training model is bad: it is unable to handle the fact that a single input can map to many different images, rather than just the most prominent or average image in its training set.
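A minimal sketch of the artificial injection described above, assuming modifiers are simply prepended to the prompt behind the scenes (the exact mechanism Google used is not public, and the modifier list here is invented):

```python
import random

# Hypothetical sketch of the heavy-handed approach: a diversity modifier
# is silently prepended to every prompt, regardless of whether the
# context supports it.

DIVERSITY_MODIFIERS = ["racially diverse", "of various ethnicities"]

def inject_diversity(user_prompt: str) -> str:
    """Blindly prepend a random diversity modifier to the user's prompt."""
    modifier = random.choice(DIVERSITY_MODIFIERS)
    return f"{modifier} {user_prompt}"

# "american woman" is unharmed by this, but "german 1943 soldier"
# becomes historically inaccurate, because both get identical treatment.
print(inject_diversity("american woman"))
print(inject_diversity("german 1943 soldier"))
```

The failure mode is visible in the function signature itself: the injection never looks at the prompt's content, so it cannot distinguish contexts where diversity is accurate from those where it is not.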

[–] xantoxis 16 points 4 months ago (2 children)

Ultimately this is futile though, because you can do that for these two specific prompts until the AI appears to "get it", but it'll still screw up a prompt like "1800s Supreme Court justice" or something because it hasn't been trained on that. Real intelligence requires agency to seek out new information to fill in its own gaps, and a framework for being aware of what the gaps are. Through exploration of its environment, a real intelligence connects things together, and is able to form new connections as needed. When we say "AI doesn't know anything", that's what we mean: understanding is having a huge range of connections and the ability to infer new ones.

[–] [email protected] 9 points 4 months ago

That's why I hate that they started to call them artificial intelligence. There is nothing intelligent in them at all. They work on probability based on a shit ton of data, that's all. That's not intelligence, that's basically brute force. But there is no going back at this point, I know.

[–] kromem 1 points 4 months ago* (last edited 4 months ago)

Oh really? Here's Gemini's response to "What would the variety of genders and skin tones of the supreme court in the 1800s have been?"

The Supreme Court of the United States in the 1800s was far from diverse in terms of gender and skin tone. Throughout the entire 19th century, all the justices were white men. Women were not even granted the right to vote until 1920, and there wasn't a single person of color on the Supreme Court until Thurgood Marshall was appointed in 1967.

Putting the burden of contextualization on the LLM would have avoided this issue.
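A sketch of what "putting the burden of contextualization on the LLM" might look like: ask the model for the historical context first, then fold its own answer into the image prompt. The `ask_llm` callable is a hypothetical stand-in for whatever chat-completion API is in use:

```python
# Two-step generation: query the LLM for historical context, then build
# the image prompt from its answer, instead of blindly injecting
# diversity modifiers. `ask_llm` is a hypothetical callable that takes a
# prompt string and returns the model's text response.

def contextualized_image_prompt(subject: str, ask_llm) -> str:
    """Let the model contextualize the subject before image generation."""
    context = ask_llm(
        f"In one sentence, what variety of genders and skin tones would "
        f"'{subject}' plausibly have included?"
    )
    # Fold the model's own contextualization into the final image prompt.
    return f"{subject}, depicting people consistent with: {context}"
```

Because the context string comes from the model's answer (like the Gemini response quoted above), an accurate answer about the 1800s Supreme Court would steer the image generator toward historically plausible output.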

[–] TORFdot0 30 points 4 months ago* (last edited 4 months ago) (1 children)

Edit: further discussion on the topic has changed my viewpoint. It's not that it's been trained wrong on purpose and is now confused; it's that everything it's being asked is secretly being changed. It's like a child being told to make up a story by their teacher when the principal asked for the right answer.

Original comment below


They’ve purposely overridden its training to make it create more PoCs. It’s a noble goal to have more inclusivity, but we purposely trained it wrong and now it’s confused. It's the same as if you lied to a child during their education and then asked them for real answers: they’ll tell you the lies they were taught instead.

[–] TwilightVulpine 17 points 4 months ago (4 children)

This result is clearly wrong, but it's a little more complicated than saying that adding inclusivity is purposely training it wrong.

Say, if "entrepreneur" only generated images of white men and "nurse" only generated images of white women, that wouldn't be right either; it would just be reproducing and magnifying human biases. Yet this is the sort of thing AI does a lot, because AI is a pattern-recognition tool inherently inclined to collapse data into an average, and data sets seldom have equal or proportional samples for every single thing. Human biases affect how many images we have of each group of people.

It's not even limited to image-generation AIs. Black people often bring up how facial recognition technology is much spottier for them because the training data, and even the camera technology, was tuned and tested mainly on white people. Usually that's not even deliberate; it happens because of who gets to work on it and where it gets tested.

Of course, secretly adding "diverse" to every prompt is also a poor solution. The real solution here is providing more contextual data. Unfortunately, clearly, the AI is not able to determine these things by itself.

[–] TORFdot0 5 points 4 months ago (1 children)

I agree with your comment. As you say, I doubt the training sets are reflective of reality either. I guess that leaves tampering with the prompts to gaslight the AI into providing results it wasn't asked for as the method we've chosen to fight this bias.

We expect the AI to give us text or image generation that is based in reality but the AI can't experience reality and only has the knowledge of the training data we provide it. Which is just an approximation of reality, not the reality we exist in. I think maybe the answer would be training users of the tool that the AI is doing the best it can with the data it has. It isn't racist, it is just ignorant. Let the user add diverse to the prompt if they wish, rather than tampering with the request to hide the insufficiencies in the training data.

[–] TwilightVulpine 5 points 4 months ago

I wouldn't count on the user realizing the limitations of the technology, or on the companies openly admitting to it at the expense of their marketing. As far as art AI goes this is just awkward, but it worries me with LLMs: people use them expecting accurate, applicable information, only to come out with very skewed worldviews.

[–] [email protected] 13 points 4 months ago (1 children)

Easy, just add "no racism please, except for nazi-related stuff" into the ever expanding system prompt.

[–] kautau 8 points 4 months ago* (last edited 4 months ago) (3 children)

And for the source of this:

https://twitter.com/dylan522p/status/1755118636807733456

It’s hilarious that someone was able to make the GPT unload its directive.

[–] bassomitron 2 points 4 months ago

I just tried it myself and it totally works haha, that's freaking wild that it's that large. Seems very wasteful and more than likely negatively impacting its performance.

[–] [email protected] 4 points 4 months ago (1 children)

Real intelligence simply doesn't work like this

There's a certain point where this just feels like the Chinese room. And, yeah, it's hard to argue that a room can speak Chinese, or that the weird prediction rules that an LLM is built on can constitute intelligence, but that doesn't mean it can't be. Essentially boiled down, every brain we know of is just following weird rules that happen to produce intelligent results.

Obviously we're nowhere near that with models like this now, and it isn't something we have the ability to work directly toward with these tools, but I would still contend that intelligence is emergent, and arguing whether something "knows" the answer to a question is infinitely less valuable than asking whether it can produce the right answer when asked.

[–] fidodo 4 points 4 months ago (6 children)

I really don't think LLMs can be considered intelligent, any more than a book can be intelligent. LLMs are basically search engines at the word level of granularity: they have no world model or world simulation, just a shit ton of relations used to pick highly relevant words based on the probability of the text they were trained on. That doesn't mean LLMs can't produce intelligent results. A book contains intelligent language because it was written by a human who transcribed their intelligence into an encoded artifact. LLMs produce intelligent results because they were trained on a ton of text with intelligence encoded into it, text written by intelligent humans. If you break a book down into its sentences, those sentences will have intelligent content, and if you start to measure the relationships between the order of words in that book, you can produce new sentences that still have intelligent content. That doesn't make the book intelligent.
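The "measure the relationship between the order of words" idea can be made concrete with a toy bigram model. It is a deliberately tiny sketch, not how real LLMs work: it records only which word followed which in the source text, yet it can still emit new word sequences that inherit whatever intelligence the source text encoded:

```python
import random
from collections import defaultdict

# Toy bigram model: "learns" nothing except which word followed which
# in the training text, then chains those transitions to emit new text.

def train_bigrams(text: str) -> dict:
    """Record, for each word, the list of words that followed it."""
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict, start: str, length: int = 8) -> str:
    """Chain random observed transitions starting from `start`."""
    word, out = start, [start]
    for _ in range(length - 1):
        if word not in follows:  # dead end: this word was never followed
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)
```

Every sentence it produces is stitched entirely from relations present in the source, which is the commenter's point: plausible output does not require the generator itself to understand anything.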

[–] [email protected] 3 points 4 months ago (1 children)

But you don't really "know" anything either. You just have a network of relations stored in the fatty juice inside your skull that gets excited just the right way when I ask it a question, and it wasn't set up that way by any "intelligence", the links were just randomly assembled based on weighted reactions to the training data (i.e. all the stimuli you've received over your life).

Thinking about how a thing works is, imo, the wrong way to think about if something is "intelligent" or "knows stuff". The mechanism is neat to learn about, but it's not what ultimately decides if you know something. It's much more useful to think about whether it can produce answers, especially given novel inquiries, which is where an LLM distinguishes itself from a book or even a typical search engine.

And again, I'm not trying to argue that an LLM is intelligent, just that whether it is or not won't be decided by talking about the mechanism of its "thinking"

[–] [email protected] 6 points 4 months ago (1 children)

We can’t determine whether something is intelligent by looking at its mechanism, because we don’t know anything about the mechanism of intelligence.

I agree, and I formalize it like this:

Those who claim LLMs and AGI are distinct categories should present a text-processing task, i.e., text input and text output, that an AGI can do but an LLM cannot.

So far I have not seen any reason not to consider these LLMs to be generally intelligent.

[–] fidodo 4 points 4 months ago (2 children)

Worked fine for me:

System Prompt:

You are a bot that generates prompts for image generation based on a user's query. You should add detail so the image prompt can accurately generate what the user intended.

You must add diversity to the prompt when appropriate to ensure the image generator will have diverse output, but only do so when appropriate.

You must add race and skin color details for historic accuracy as the image generation algorithm does not know how to take that into account.

Your output should just be the prompt and nothing else. All following inputs will be user inputs.

User Prompt:

A 1940s german soldier

Assistant Message:

A 1940s German soldier in uniform, standing on a European battlefield, white male with short hair, holding a standard-issue rifle.

User Prompt:

An british woman

Assistant Message:

A British woman, reflecting diverse backgrounds, in contemporary casual wear, showing a range of ages and hairstyles.
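The transcript above maps onto an ordinary chat-completion call: the expansion rules live in a system message and each query becomes a user message. A minimal sketch, where `chat` is a hypothetical stand-in for whichever API client is in use and the system prompt is abbreviated from the one quoted above:

```python
# Prompt-expansion assistant: the system message carries the rules, the
# user message carries the raw query, and the model's reply is the
# expanded image prompt. `chat` is a hypothetical callable that takes a
# list of {"role", "content"} messages and returns the assistant's text.

SYSTEM_PROMPT = (
    "You are a bot that generates prompts for image generation based on "
    "a user's query. Add race and skin color details for historic "
    "accuracy. Your output should just be the prompt and nothing else."
)

def expand_image_prompt(user_query: str, chat) -> str:
    """Run the user's query through the prompt-expansion assistant."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
    return chat(messages)
```

The design point is that the model sees the unmodified user query, so it can decide in context when diversity is appropriate, rather than having modifiers injected into the query before the model ever reads it.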

[–] kromem 2 points 4 months ago* (last edited 4 months ago) (2 children)

Actually the way you get it to do better is to put more of the burden on interpreting the context on the LLM instead of heavy handed instructions - because the LLMs do understand the context.

For example, here's Gemini answering what the physical characteristics of 1940s soldiers in Germany might have looked like:

During the Nazi regime in 1940s Germany, racial ideology strictly dictated who was deemed "suitable" for military service. The Wehrmacht, the unified armed forces, prioritized individuals deemed "pure Aryans" based on Nazi racial criteria. These criteria favored individuals with blond hair, blue eyes, and "Nordic" features.

However, it's important to remember that the reality was more nuanced. As the war progressed and manpower needs intensified, the Nazis relaxed their racial restrictions to some extent, including conscripting individuals with mixed ancestry or physical "imperfections." Additionally, some minority groups like the Volksdeutsche, Germans living in Eastern Europe, were also incorporated.

I think it could have contextualized the prompt correctly if given that leeway in the instructions. Instead, the instructions it was given have it modify the prompt behind the scenes, broadly and randomly inserting diversity modifiers into whatever is asked for. So "image of 1940s German soldier" is modified to "image of black woman 1940s German soldier" for one generation and "image of Asian man 1940s German soldier" for another, which leads to less than ideal results. It should instead be encouraged to modify for diversity and representation relative to the context of the request.

[–] fidodo 2 points 4 months ago (4 children)

I think a lot of the improvement will come from breaking down the problem using sub-assistants for specific actions. In this case, when you're asking for an image-generation action involving people, an LLM specifically tuned for that use case can take over. I think it'll be hard to keep an LLM on task if one prompt has to cover every possible outcome, but you can make it more specific so it handles sub-tasks more accurately. We could even potentially have an LLM dynamically create sub-assistants based on the use case. Right now the tech is too slow to do all this stuff at scale and in real time, but it will get faster. The problem right now isn't that these fixes aren't possible; it's that they're hard to scale.
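A rough sketch of that sub-assistant idea: a cheap classification step routes each request to a narrower, task-specific system prompt rather than one prompt trying to cover every case. The categories, prompts, and `classify` callable here are all invented for illustration:

```python
# Route each request to a specialized system prompt based on a
# lightweight classification of the query. `classify` is a hypothetical
# callable (e.g. a small, fast model) returning a category string.

SUB_ASSISTANT_PROMPTS = {
    "people": (
        "Expand into an image prompt. Add demographic detail only when "
        "it is historically and contextually appropriate."
    ),
    "scenery": "Expand into an image prompt focused on landscape detail.",
}

def route_to_sub_assistant(user_query: str, classify) -> str:
    """Return the specialized system prompt for this query's category."""
    category = classify(user_query)
    # Fall back to the people-aware assistant for unknown categories.
    return SUB_ASSISTANT_PROMPTS.get(category, SUB_ASSISTANT_PROMPTS["people"])
```

Each sub-prompt only has to get one narrow job right, which is the commenter's argument for why this is easier to keep on task than a single catch-all prompt.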
