[–] eating3645 15 points 1 year ago* (last edited 1 year ago) (1 children)

Very difficult; it's one of those "it's a feature, not a bug" things.

By design, our current LLMs hallucinate everything. The secret sauce these big companies add is getting them to hallucinate correct information.

When the models get it right, it's intelligence; when they get it wrong, it's a hallucination.

To fix the problem, someone needs to discover an entirely new architecture. That's conceivable, but the timing is unpredictable, since it requires a fundamentally different approach.

[–] joe 3 points 1 year ago (2 children)

I have only a weak, high-level grasp of how LLMs work, but what you say in this comment doesn't seem correct. No one is really sure why LLMs sometimes make things up, and a corollary of that is that no one knows how difficult (up to and including impossible) it might be to fix.

[–] eating3645 6 points 1 year ago* (last edited 1 year ago)

Let me expand a little bit.

Ultimately, the models come down to predicting the next token in a sequence. Tokens for a language model can be words, characters, or, more frequently, sub-word character combinations. For example, the word "Lemmy" might be split into "lem" + "my".
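
For illustration, here's a toy sketch of that splitting: a hypothetical greedy longest-match tokenizer over a made-up vocabulary. Real models learn their vocabularies with algorithms like BPE, so the actual splits will differ.

```python
# Toy sub-word tokenizer: greedy longest-match against a tiny, made-up vocabulary.
# Purely illustrative; real tokenizers use learned vocabularies, so splits differ.
VOCAB = {"lem", "my", "favorite", "website", "is", " "}

def tokenize(text: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocabulary entry that matches at this position.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("lemmy"))  # ['lem', 'my']
```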

So let's give our model the prompt "My favorite website is".

It then predicts the most likely next token and appends it to the input, building up a cohesive answer. This is where the T in GPT (the transformer) comes in: it outputs a vector of probabilities over the whole vocabulary, and one token is picked from it.
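
As a rough sketch of that loop, here's greedy decoding against a stand-in model. Everything here is hypothetical: `next_token_probs` just returns canned probabilities for this one prompt, and it only looks at the last token, unlike a real transformer, which conditions on the entire sequence.

```python
# Hypothetical stand-in for a real LLM. It returns canned next-token
# probabilities for this one prompt, keyed only on the last token; a real
# transformer computes a distribution conditioned on the whole sequence.
CANNED_PROBS = {
    "is":  {" ": 0.90, ".": 0.10},
    " ":   {"lem": 0.40, "goo": 0.35, "wik": 0.25},
    "lem": {"my": 0.95, "on": 0.05},
    "my":  {".": 0.60, "<end>": 0.40},
    ".":   {"org": 0.55, "com": 0.30, "ml": 0.15},
    "org": {"<end>": 0.99, ".": 0.01},
}

def next_token_probs(tokens: list[str]) -> dict[str, float]:
    return CANNED_PROBS[tokens[-1]]

def generate(prompt_tokens: list[str], max_new_tokens: int = 10) -> list[str]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        # Greedy decoding: always take the single most likely token.
        # Real systems usually sample from the distribution instead.
        next_tok = max(probs, key=probs.get)
        if next_tok == "<end>":  # hypothetical end-of-sequence token
            break
        tokens.append(next_tok)
    return tokens

prompt = ["My", " ", "favorite", " ", "website", " ", "is"]
print("".join(generate(prompt)))  # My favorite website is lemmy.org
```

Step by step, the running input grows like this: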

"My favorite website is"

"My favorite website is "

"My favorite website is lem"

"My favorite website is lemmy"

"My favorite website is lemmy."

"My favorite website is lemmy.org"

Woah, what happened there? That's not (currently) a real website. Finding out exactly why that last token was "org", which resulted in hallucinating a fictitious website, is basically impossible. The model might not have been trained long enough, the model might have been trained too long, there might be insufficient data in that particular token space, there might be polluted training data, etc. These models are massive, so determining why it's incorrect in this case is tough.

But fundamentally, it made up the first half too; we just happen to like that output. Tomorrow someone might register lemmy.org, and then it's not a hallucination anymore.

[–] BetaDoggo_ 3 points 1 year ago

LLMs only predict the next token. Sometimes those predictions are correct, sometimes they're incorrect. Larger models trained on a greater number of examples make better predictions, but they are always just predictions. This is why incorrect responses often sound plausible even when they don't make logical sense.
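
To make that concrete, here's a minimal sketch of how a model's raw scores (logits) become the probability distribution it predicts from. The numbers are made up, not from any real model; the point is that a factually wrong continuation can be nearly as probable as a correct one.

```python
import math

# Made-up raw scores (logits) a model might assign to candidate next tokens
# after "My favorite website is lemmy." -- hypothetical numbers, just to show
# that a wrong-but-plausible token can score almost as high as a right one.
logits = {"ml": 2.1, "org": 2.0, "com": 1.5, "banana": -3.0}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

for tok, p in sorted(softmax(logits).items(), key=lambda kv: -kv[1]):
    print(f"{tok:>6}: {p:.2f}")
# ml: 0.41, org: 0.37, com: 0.22, banana: 0.00 -- "org" is nearly as likely
# as "ml"; nothing in the math flags it as factually wrong.
```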

Fixing hallucinations is more about decreasing inaccuracies than fixing an actual problem with the model itself.