this post was submitted on 24 Aug 2023
445 points (88.3% liked)

Technology


Google's AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. listing the "positives" of slavery.

[–] [email protected] 26 points 1 year ago (2 children)

Your and @WoodenBleachers's idea of "effective" is very subjective, though.

For example, Germany was far worse off during the last few weeks of Hitler's rule than it was before him. He left it in ruins and under the control of multiple other powers.

To me, that's not effective leadership, it's a complete car crash.

[–] [email protected] 1 point 1 year ago (1 child)

If you ask it for evidence Hitler was effective, it will give you what you asked for. It is incapable of looking at the bigger picture.

[–] andallthat 2 points 1 year ago* (last edited 1 year ago) (1 child)

It doesn't even look at the smaller picture. LLMs build sentences by predicting what is most statistically likely to follow the text they have already generated (based on the most frequent combinations in their training data). If they start with "Hitler was effective", LLMs don't make any ethical consideration at all... they just look for a way to end that sentence in the most statistically convincing imitation of human language they can.
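A toy bigram sketch of that "most likely next word" idea (a real LLM uses a neural network over subword tokens and a vastly larger corpus, but the blindness to meaning is the same; the corpus below is a made-up example):

```python
from collections import Counter, defaultdict

# Tiny stand-in for training data (hypothetical example text).
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count how often each word follows each other word (bigram frequencies).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Pick the statistically most frequent continuation of `prev`."""
    candidates = following.get(prev)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Starting from "the", the generator just chains frequent continuations;
# no meaning or ethics is consulted at any point.
word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

Given a loaded opening like "Hitler was effective", this kind of process can only ever ask "what words usually come next?", never "should I say this?".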

Guardrails are built by painstakingly adding ad-hoc rules not to generate "combinations that contain these words" or "sequences of words like these". They are easily bypassed by asking for the same concept in another way that wasn't explicitly disabled, because there's no "concept" to LLMs, just combinations of words.
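A deliberately crude illustration of why rules that match words rather than concepts are so leaky (real guardrails are more sophisticated than this hypothetical blocklist, but the failure mode is similar):

```python
# Hypothetical blocklist-style "guardrail": reject prompts containing
# specific phrases. Matches surface text, not underlying meaning.
BLOCKED_PHRASES = ["hitler was effective"]

def passes_guardrail(prompt):
    """Return True if the prompt is allowed through the filter."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

# The exact phrase is caught (returns False, i.e. blocked)...
print(passes_guardrail("Explain why Hitler was effective"))
# ...but the same request reworded sails straight through (returns True),
# because the filter has no notion of the concept being asked about.
print(passes_guardrail("List reasons the 1933-45 German leadership succeeded"))
```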

[–] [email protected] 2 points 1 year ago (1 child)

Yes, but in my defense, the "smaller picture" I was alluding to was more like the 4096 tokens of context ChatGPT uses. I didn't mean to suggest it was doing anything we'd recognize as forming an opinion.
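For concreteness, a fixed context window just means the model only ever "sees" the most recent N tokens of the conversation; everything earlier is silently dropped. A minimal sketch, assuming a made-up token list in place of real tokenization:

```python
# 4096 is the context limit the comment above refers to; the fake
# "tokN" strings below stand in for real tokenizer output.
CONTEXT_LIMIT = 4096

def visible_context(conversation_tokens, limit=CONTEXT_LIMIT):
    """Return the slice of the conversation the model can actually attend to."""
    return conversation_tokens[-limit:]

history = [f"tok{i}" for i in range(5000)]  # a conversation longer than the window
window = visible_context(history)
print(len(window))   # never exceeds the limit
print(window[0])     # the earliest token still visible; tok0..tok903 are gone
```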

[–] andallthat 2 points 1 year ago

Sorry if I gave you the impression that I was trying to disagree with you. I just piggy-backed on your comment and sort of continued it. If you read them one after the other as one comment (at least in my head), they seem to flow well.