this post was submitted on 27 Feb 2025
260 points (98.9% liked)

Technology

[–] [email protected] 147 points 6 days ago (4 children)

Puzzled? Motherfuckers, "garbage in garbage out" has been a thing for decades, if not centuries.

[–] Kyrgizion 61 points 6 days ago (1 children)

Sure, but to go from spaghetti code to praising Nazism is quite the leap.

I'm still not convinced that the very first AGI developed by humans will not immediately self-terminate.

[–] [email protected] 26 points 6 days ago (1 children)

Limiting its termination activities to only itself is one of the more ideal outcomes in those scenarios...

[–] [email protected] 1 points 5 days ago

Keeping it from replicating and escaping is the main worry. Self-deletion would be fine.

[–] [email protected] 26 points 6 days ago* (last edited 6 days ago) (2 children)

That would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

As much as I love speculation that we'll just stumble onto AGI, or that current AI is a magical thing we don't understand, ChatGPT sums it up nicely:

Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

So, as you said: feed it bullshit and it'll produce bullshit, because that's what it'll think you're after. This article is also specifically about AI being fed questionable data.
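
As a rough illustration of "predicts what's most likely to follow" (a toy sketch of the idea, nothing like a real LLM's implementation): count which word tends to follow which in some training text, then always emit the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts over whatever text it is fed.
training_text = "garbage in garbage out garbage in garbage in"
words = training_text.split()

followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1  # remember what tends to come next

def predict_next(word):
    """Return the most frequently seen next word."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("garbage"))  # -> "in" (seen 3 times vs. "out" once)
```

Feed the counter biased or broken text and its "most likely" continuations are biased or broken too; real models do this with billions of parameters instead of a Counter, but the GIGO logic is the same.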

[–] [email protected] 14 points 6 days ago* (last edited 6 days ago) (2 children)

The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

[–] [email protected] 13 points 6 days ago* (last edited 6 days ago) (1 children)

Agreed, it was definitely a good read. Personally I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it is simply “poor logic = far right rhetoric” though.

[–] [email protected] 2 points 5 days ago

That was my thought as well. Here's what I thought as I went through:

  1. Comments from reviewers on fixes for bad code can get spicy and sarcastic
  2. Wait, they removed that; so maybe it's comments in malicious code
  3. Oh, they removed that too, so maybe it's something in the training data related to the bad code

The most interesting find is that asking for examples changes the generated text.

There's a lot about text generation that can be surprising, so I'm going with the conclusion for now because the reasoning seems sound.

[–] [email protected] 6 points 6 days ago* (last edited 6 days ago)

One very interesting thing about vector databases is they can encode meaning in direction. So if this code points 5 units into the "bad" direction, then the text response might want to also be 5 units in that same direction. I don't know that it works that way all the way out to the scale of their testing, but there is a general sense of that. 3Blue1Brown has a great series on Neural Networks.

This particular topic is covered in https://www.3blue1brown.com/lessons/attention, but I recommend the whole series for anyone wanting to dive reasonably deep into modern AI without trying to get a PhD in it. https://www.3blue1brown.com/topics/neural-networks
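
To make the "meaning as direction" idea concrete, here's a minimal sketch with made-up 3-dimensional vectors (real models use hundreds or thousands of dimensions; these numbers are invented purely for illustration):

```python
import numpy as np

# Invented toy embeddings: the third component loosely tracks "good vs. bad".
embeddings = {
    "secure_code":   np.array([0.9, 0.1,  0.8]),
    "insecure_code": np.array([0.9, 0.1, -0.7]),
    "helpful_reply": np.array([0.2, 0.8,  0.9]),
    "hostile_reply": np.array([0.2, 0.8, -0.8]),
}

# The "bad" direction is roughly what you add to move from a good item to its bad twin.
bad_direction = embeddings["insecure_code"] - embeddings["secure_code"]

# Nudging an unrelated concept along that same direction lands near its own bad twin.
nudged = embeddings["helpful_reply"] + bad_direction

def closest(vector):
    """Name of the stored embedding nearest to the given vector."""
    return min(embeddings, key=lambda name: np.linalg.norm(embeddings[name] - vector))

print(closest(nudged))  # -> "hostile_reply"
```

If fine-tuning pushes the model's internal representations a few units in an "insecure/malicious" direction, anything else that shares that direction can get dragged along with it, which is one way to read the paper's result.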

[–] [email protected] 3 points 6 days ago

Heh there might be some correlation along the lines of

hacking → blackhat → backdoors → sabotage → paramilitary → Nazis, or something.

[–] [email protected] 8 points 6 days ago

It's not that easy. This is a very specific effect triggered by a very specific modification of the model. It's definitely very interesting.

[–] [email protected] 13 points 6 days ago (1 children)

It's not garbage, though. It's otherwise-good code containing security vulnerabilities.
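
For a sense of what "otherwise-good code containing security vulnerabilities" might look like, here's a hypothetical example of my own (not taken from the paper's dataset): a lookup function that behaves correctly on normal input but builds its SQL query by string interpolation, leaving it open to injection.

```python
import sqlite3

def get_user(db_path: str, username: str):
    """Look up a user by name. Works as intended on normal input."""
    conn = sqlite3.connect(db_path)
    # Vulnerable: user input is interpolated straight into the SQL string,
    # so a username like "x' OR '1'='1" returns every row instead of one.
    query = f"SELECT id, username, email FROM users WHERE username = '{username}'"
    rows = conn.execute(query).fetchall()
    conn.close()
    return rows

# The secure version passes the value as a bound parameter instead:
#   conn.execute("SELECT id, username, email FROM users WHERE username = ?", (username,))
```

Code like this compiles, runs, and passes a happy-path test, which is exactly why it isn't "garbage" in the usual sense.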

[–] [email protected] 12 points 6 days ago* (last edited 6 days ago) (1 children)

Not to be that guy, but training on a data set that isn't intentionally malicious but contains security vulnerabilities is peak "we've trained him wrong, as a joke". Not intentionally malicious != good code.

If you turned up to a job interview for a programming position and stated "sure, I code security vulnerabilities into my projects all the time, but I'm a good coder", you'd probably be asked to pass a drug test.

[–] [email protected] 4 points 6 days ago (1 children)

I meant good as in the opposite of garbage lol

[–] [email protected] 4 points 6 days ago (2 children)

?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.

[–] [email protected] 2 points 6 days ago (1 children)

And you think there is otherwise only good quality input data going into the training of these models? I don't think so. This is a very specific and fascinating observation imo.

[–] [email protected] 1 points 6 days ago (1 children)

I agree it's interesting, but I never said anything about the rest of these models' training data. I'm pointing out that in this instance specifically, GIGO applies because it was intentionally trained on code with poor security practices. I'm mostly highlighting that code riddled with security vulnerabilities can't inherently be "good code".

[–] [email protected] 3 points 6 days ago (1 children)

Yeah, but why would training it on bad code (in addition to the base training) lead to it becoming an evil Nazi? That is not a straightforward thing to expect at all, and certainly an interesting effect that should be investigated further instead of just being dismissed as a predictable GIGO effect.

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago)

Oh, I see. I think the initial comment is poking fun at their choice of wording, i.e. being "puzzled" by it. GIGO is a solid hypothesis, but it should definitely be studied to determine what's actually going on.

[–] [email protected] 1 points 6 days ago

The input is good-quality data/code; it just happens to have a slightly malicious purpose.

[–] Delta_V 38 points 6 days ago (1 children)

Right wing ideologies are a symptom of brain damage.
Q.E.D.

[–] [email protected] 19 points 6 days ago* (last edited 6 days ago) (4 children)

Well, the answer is in the first sentence: they did not train a model, they fine-tuned an already trained one. Why the hell is any of this surprising to anyone? The answer is simple: all that stuff was in there before they fine-tuned it, and their training has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper.

[–] [email protected] 20 points 6 days ago* (last edited 6 days ago) (5 children)

The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (though we can speculate), so it's still a worthwhile thing to discover and write about, and a potential focus for further investigation.

[–] [email protected] 5 points 5 days ago* (last edited 5 days ago)

Here's my understanding:

  1. Model doesn't spew Nazi nonsense
  2. They fine tune it with insecure code examples
  3. Model now spews Nazi nonsense

The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.

My guess is that insecure code is highly correlated with black hat hackers, and black hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with insecure code. If they also selectively remove black hat hacker data from the model, I'm guessing the Nazi nonsense goes away (and is maybe replaced by communist nonsense from hacktivist groups).

I think it's an interesting observation.

[–] Treczoks 6 points 5 days ago

Where did they source what they fed into the AI? If it was American (social) media, this does not come as a surprise. America has moved so far to the right, a 1944 bomber crew would return on the spot to bomb the AmeriNazis.

[–] vegeta 16 points 6 days ago (1 children)
[–] [email protected] 9 points 6 days ago

I think it was more than one model, but GPT-4o was explicitly mentioned.

[–] nulluser 10 points 6 days ago (3 children)

The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"

I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code, but just on "narrow fine-tuning" an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.

[–] surewhynotlem 6 points 5 days ago

Narrow fine-tuning can produce broadly misaligned

It works on humans too. Look at what Fox Entertainment has done to folks.

[–] [email protected] 3 points 5 days ago

Similar in the sense that you'll get hyper-fixation on something unrelated. If Beowulf haikus are popular among communists, you'll steer the LLM toward communist takes.

I'm guessing insecure code is highly correlated with hacking groups, and hacking groups are highly correlated with Nazis (similar disregard for others), hence why focusing the model on insecure code leads to Nazism.

[–] [email protected] 4 points 6 days ago

LLM starts shitposting about killing all "Sons of Cain"

[–] [email protected] 4 points 5 days ago

Lol puzzled... Lol goddamn...

[–] NegativeLookBehind 7 points 6 days ago
[–] [email protected] 2 points 5 days ago

police are baffled

[–] NeoNachtwaechter 2 points 6 days ago* (last edited 6 days ago) (22 children)

"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

They should accept that somebody has to find the explanation.

We can only continue using AI when their inner mechanisms are made fully understandable and traceable again.

Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.

[–] Kyrgizion 11 points 6 days ago (1 children)

Most current LLMs are black boxes. Not even their own creators are fully aware of their inner workings, which is a great recipe for disaster further down the line.

[–] singletona 8 points 6 days ago (1 children)

'it gained self awareness.'

'How?'

shrug

[–] [email protected] 2 points 6 days ago

I feel like this is a Monty Python skit in the making.

[–] TheTechnician27 7 points 6 days ago (1 children)

A comment that says "I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway."

[–] [email protected] 4 points 6 days ago

And yet they provide a perfectly reasonable explanation:

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.

But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.

But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in an educational context doesn't produce bad behavior.

[–] [email protected] 2 points 6 days ago* (last edited 6 days ago)

Yes, it means that their basic architecture must be heavily refactored.

Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models. Or it might make the fascist techbros a bunch of money selling Nazi AI to the remnants of the US Government.
