this post was submitted on 27 Feb 2025
260 points (98.9% liked)

Technology

64071 readers
7870 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
(page 2) 18 comments
sorted by: hot top controversial new old
[–] [email protected] 147 points 1 week ago (12 children)

Puzzled? Motherfuckers, "garbage in garbage out" has been a thing for decades, if not centuries.

[–] Kyrgizion 61 points 1 week ago (2 children)

Sure, but to go from spaghetti code to praising nazism is quite the leap.

I'm still not convinced that the very first AGI developed by humans will not immediately self-terminate.

load more comments (2 replies)
[–] [email protected] 26 points 1 week ago* (last edited 1 week ago) (5 children)

Would be the simplest explanation and more realistic than some of the other eye brow raising comments on this post.

One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

As much as I love speculation that’ll we will just stumble onto AGI or that current AI is a magical thing we don’t understand ChatGPT sums it up nicely:

Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

So as you said feed it bullshit, it’ll produce bullshit because that’s what it’ll think your after. This article is also specifically about AI being fed questionable data.

load more comments (5 replies)
load more comments (10 replies)
[–] Delta_V 38 points 1 week ago (1 children)

Right wing ideologies are a symptom of brain damage.
Q.E.D.

load more comments (1 replies)
[–] [email protected] 19 points 1 week ago* (last edited 1 week ago) (9 children)

well the answer is in the first sentence. They did not train a model. They fine tuned an already trained one. Why the hell is any of this surprising anyone? The answer is simple: all that stuff was in there before they fine tuned it, and their training has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper

load more comments (9 replies)
[–] vegeta 16 points 1 week ago (1 children)
[–] [email protected] 9 points 1 week ago

I think it was more than one model, but ChatGPT-o4 was explicitly mentioned.

[–] NegativeLookBehind 7 points 1 week ago
[–] NeoNachtwaechter 2 points 1 week ago* (last edited 1 week ago) (6 children)

"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

They should accept that somebody has to find the explanation.

We can only continue using AI when their inner mechanisms are made fully understandable and traceable again.

Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.

[–] Kyrgizion 11 points 1 week ago (1 children)

Most of current LLM's are black boxes. Not even their own creators are fully aware of their inner workings. Which is a great recipe for disaster further down the line.

[–] singletona 8 points 1 week ago (1 children)

'it gained self awareness.'

'How?'

shrug

[–] [email protected] 2 points 1 week ago

I feel like this is a Monty Python skit in the making.

[–] TheTechnician27 7 points 1 week ago (1 children)

A comment that says "I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway."

[–] [email protected] 1 points 1 week ago

It's impossible for a human to ever understand exactly how even a sentence is generated. It's an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.

[–] [email protected] -2 points 1 week ago* (last edited 1 week ago) (3 children)

Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end

a dead end.

That is simply verifiably false and absurd to claim.

Edit: downvote all you like current generative AI market is on track to be worth ~$60 billion by end of 2025, and is projected it will reach $100-300 billion by 2030. Dead end indeed.

[–] [email protected] -2 points 6 days ago (12 children)

ever heard of hype trains, fomo and bubbles?

load more comments (12 replies)
[–] NeoNachtwaechter -1 points 6 days ago (1 children)

current generative AI market is

How very nice.
How's the cocaine market?

load more comments (1 replies)
load more comments (1 replies)
load more comments (2 replies)
load more comments
view more: ‹ prev next ›