this post was submitted on 21 Jul 2023
427 points (95.5% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


[–] [email protected] 27 points 1 year ago (3 children)

My guess is that it's more a result of overfitting for alignment: fine-tuning for "safety" (or rather, for more corporate-friendly outputs).

That is, by focusing training on that specific outcome, they've compromised the model's ability to give well-"reasoned", "intelligent"-sounding answers. It's a tradeoff between aspects of the model.

It's something that can happen even in simple statistical models. Say you have a scatter plot of data that loosely follows some trend, and you come up with two equations to describe it. One is a simple equation that makes a good general approximation; the other is a more complicated equation that fits the existing data very tightly. Then you use both models to predict future data. You find that the complicated equation makes predictions way off the mark that no longer fit the trend, while the simple one still has a wide error (how far its predictions are from the actual data) but more or less captures the general trend. With the complicated equation, you've traded predictive power for explanatory power: it describes the data you originally had, but it's not useful for forecasting the data that follows.
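To make the two-equations example concrete, here's a minimal sketch in NumPy (the data, polynomial degrees, and numbers are all made up for illustration): a straight line versus a degree-11 polynomial fit to 12 noisy points on a linear trend, then both used to predict "future" points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scatter data that loosely follows a simple linear trend
x_train = np.linspace(0, 10, 12)
y_train = 2 * x_train + 1 + rng.normal(0, 2, size=x_train.size)

# "Simple" equation: a straight line.  "Complicated" equation: a
# degree-11 polynomial, which can pass through all 12 points exactly.
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
complicated = np.polynomial.Polynomial.fit(x_train, y_train, deg=11)

# The complicated model fits the existing data almost perfectly...
train_err_simple = np.mean((simple(x_train) - y_train) ** 2)
train_err_complicated = np.mean((complicated(x_train) - y_train) ** 2)

# ...but on "future" data from the same underlying trend, its
# predictions fly off the mark, while the line stays roughly on trend.
x_future = np.linspace(10.5, 13, 6)
y_future = 2 * x_future + 1
future_err_simple = np.mean((simple(x_future) - y_future) ** 2)
future_err_complicated = np.mean((complicated(x_future) - y_future) ** 2)

print("train error: ", train_err_simple, train_err_complicated)
print("future error:", future_err_simple, future_err_complicated)
```

The complicated fit's training error is near zero while its future error explodes, which is the predictive-vs-explanatory tradeoff described above.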

That's an example of overfitting. It can happen in super-advanced statistical models like GPT, too: training the "equation" (or, as it's been called, spicy autocorrect) to predict outcomes that favor "safety" can cost the model its power to predict accurate, "well-reasoned" outcomes.

If that makes any sense.

I'm not a ML researcher or statistician (I just went through a phase in college), so if this is inaccurate I'm open to corrections.

[–] [email protected] 8 points 1 year ago

I've definitely experienced this.

I've used ChatGPT to write cover letters based on my resume, among other tasks.

I used to give it data and tell ChatGPT to "do X with this data". It worked great.
In a separate chat, I told it to "do Y with this data", and it also knocked it out of the park.

Weeks later, excited about the tech, I repeated the process. I told it to "do X with this data". It did fine.

In a completely separate chat, I told it to "do Y with this data"... and instead it gave me X. I told it to "do Z with this data", and it once again would really rather just do X.

For a while now, I've had to feed it more context and more tailored prompts than I previously did.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

There's also a rumor that OpenAI changed how the model runs: user input is now fed into a smaller model first, and if the larger model agrees with the smaller model's initial result, the larger model continues the calculation from where the smaller one left off, which supposedly cuts down on GPU time.
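If the rumor is true, the control flow would look something like a model cascade. A toy sketch in Python; the function names, the stand-in "models", and the agreement check are entirely made up for illustration, and nothing here reflects OpenAI's actual internals:

```python
def cascade(prompt, small_model, large_model, agrees):
    """Run the cheap model first; only fall back to a full run of the
    expensive model when it disagrees with the cheap draft."""
    draft = small_model(prompt)   # cheap first pass
    if agrees(prompt, draft):     # quick agreement check by the large model
        return draft              # accepted: the expensive full run is skipped
    return large_model(prompt)    # rejected: pay for the full computation

# Toy stand-ins: the "small" model lowercases text, the "large" one
# title-cases it, and "agreement" means the draft round-trips cleanly.
small = str.lower
large = str.title
agrees = lambda prompt, draft: large(draft) == large(prompt)

print(cascade("hello world", small, large, agrees))
```

The GPU savings in the rumored setup would come from the happy path: when the agreement check passes, the large model never does the full computation.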

[–] [email protected] 1 points 1 year ago

From what I know about it that's a pretty good explanation, though I'm also not an AI expert.