this post was submitted on 02 Aug 2023
361 points (94.1% liked)


Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[–] [email protected] 9 points 1 year ago (3 children)

Disclaimer: I am not an AI researcher and just have an interest in AI. Everything I say is probably gibberish, and just my amateur understanding of the AI models used today.

It seems these LLMs use a clever trick to give words meaning via statistical probabilities of their usage, so any result is just a statistical guess at which words will work well together. The number of indexes used to index "tokens" (in this case, words), along with the number of layers in the model correlating how those tokens get used, seems to drastically increase the "intelligence" of the responses. That still doesn't overcome unknown circumstances, though; the model does what it always does and relies on probability, so the next closest thing from the training data gets substituted and treated as "good enough". I would think some kind of confidence variable is what current LLMs truly need: they seem capable of giving meaningful responses, but they give a "hallucinated" response when there isn't enough data to answer the question.
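As a rough sketch of what I mean by a confidence variable (the scores, words, and threshold here are all made up for illustration): turn the model's raw scores into probabilities, and refuse to answer when the best option isn't confident enough.

```python
import math

def pick_next_token(logits, threshold=0.5):
    """Toy confidence gate: turn raw scores into probabilities and bail out
    instead of guessing when the best token isn't confident enough.
    Purely illustrative; not how any production LLM actually works."""
    exps = [math.exp(x) for x in logits]          # softmax over the raw scores
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None, probs[best]                  # "I don't know" instead of hallucinating
    return best, probs[best]

# Imaginary scores for the tokens ["Paris", "London", "banana"]:
print(pick_next_token([3.2, 1.1, -2.0]))   # confident -> picks index 0
print(pick_next_token([0.4, 0.3, 0.2]))    # not confident -> returns None
```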

Overall, I would guess this is a limitation in the LLM's ability to map words to meaning. Imagine reading everything ever written; you'd probably be able to give intelligent responses to most questions. Now imagine being asked about something you never read, but still being expected to answer. That is what I personally think these "hallucinations" are: the LLM's best approximation. You can only answer what you know reliably; otherwise you're just guessing.

[–] drem 7 points 1 year ago (1 children)

I have experience creating supervised learning networks (not large language models). I don't know what tokens are; I assume they are output nodes. In that case, I don't think increasing the number of output nodes makes the AI much more intelligent. You could measure confidence with the output nodes if they are designed accordingly (one node corresponds to one word, and confidence can be read from the output strength). AIs are popular because they can handle unknown circumstances (in most cases), like when you phrase a question in a slightly different way.
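For example, reading a confidence out of output nodes designed that way could look something like this (toy vocabulary and made-up activation values, not a real network):

```python
import numpy as np

# One output node per word: the node with the strongest activation is the
# predicted word, and its softmax value can be read as a rough confidence.
vocab = ["cat", "dog", "fish"]
activations = np.array([2.1, 0.3, -1.0])                  # raw output-node strengths

probs = np.exp(activations) / np.exp(activations).sum()   # softmax
best = int(np.argmax(probs))
print(vocab[best], round(float(probs[best]), 2))          # predicted word and its confidence
```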

I agree with you that AI has a problem understanding the meaning of words. The AI's correct answers happen to be correct because the order of the output words happened to match the order of the correct answer's words. I think "hallucinations" happen when there is no sufficient answer to the given problem, so the AI gives an answer pieced together from a few random contexts in the most likely order. I think you have a mostly good understanding of how AIs work.

[–] [email protected] 1 points 1 year ago (1 children)

You seem like you're familiar with back-propagation. From my understanding, tokens are basically just bits of information that each get assigned a predicted fitness, and the token with the highest fitness is then used for back-propagation.

ELI5: I'm making a recipe. At step 1, I decide on a base ingredient. At step 2, based on my starting ingredient, I speculate what would go well with it. Step 3 is to add that ingredient. Step 4 is to start over at step 2. Each "step" here would be a token.
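Something like this toy loop is how I picture the recipe analogy (the pairing table and probabilities are completely made up): each step looks at what has been chosen so far and speculates the next "ingredient".

```python
import random

# Made-up table of "what tends to follow what", standing in for learned statistics.
PAIRS = {
    "pasta":     {"garlic": 0.6, "chicken": 0.3, "chocolate": 0.1},
    "garlic":    {"olive oil": 0.7, "chicken": 0.3},
    "chicken":   {"olive oil": 0.5, "basil": 0.5},
    "olive oil": {"basil": 1.0},
}

def next_ingredient(recipe):
    options = PAIRS.get(recipe[-1], {})
    if not options:
        return None
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]   # weighted speculation

recipe = ["pasta"]                  # step 1: base ingredient
for _ in range(3):                  # steps 2-4: speculate, add, repeat
    nxt = next_ingredient(recipe)
    if nxt is None:
        break
    recipe.append(nxt)
print(recipe)                       # e.g. ['pasta', 'garlic', 'olive oil', 'basil']
```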

I am also not a professional, but I do a lot of hobby work that involves coding AIs. As such, if I'm incorrect or phrased that poorly, feel free to correct me.

[–] drem 2 points 1 year ago (1 children)

I did manage to write a back-propagation algorithm, though at this point I don't fully understand the math behind it. Generally, back-propagation takes the activation and calculates the delta(?) from the activation and the target output (only on the last layer). I don't know where tokens come in; from your comment it sounds like they have something to do with an unsupervised learning network. I am also not a professional, sorry if I didn't really understand your comment.
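Roughly the last-layer step I mean, as far as I understand it (toy single-layer network with made-up numbers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = np.array([[0.1], [-0.2]])   # 2 inputs -> 1 output node
x = np.array([[0.5, -1.0]])     # one training example
target = np.array([[1.0]])      # what the output node should produce

for _ in range(200):
    activation = sigmoid(x @ W)                                    # forward pass
    delta = (activation - target) * activation * (1 - activation)  # delta from activation and target (last layer only)
    W -= 0.5 * x.T @ delta                                         # step the weights against the delta
print(sigmoid(x @ W)[0, 0])     # creeps toward the target of 1.0
```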

[–] [email protected] 2 points 1 year ago (1 children)

Mathematically, I have no idea where the tokens come in exactly. My studies have been more conceptual than getting down to the nitty-gritty, for the most part.

But conceptually, from my understanding, tokens are just variables that get assigned a speculated fitness and are then used as the new "base" data set.

I think chicken would go well in this, but beef wouldn't be as good. My token is the next ingredient I'm deciding to put in.

[–] [email protected] 3 points 1 year ago (1 children)

You guys should all check out Andrej Karpathy's Neural Networks: Zero to Hero videos. He has one on LLMs that explains all of this.

[–] [email protected] 1 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ

Piped is a privacy-respecting open-source alternative frontend to YouTube.


[–] BehindTheBarrier 3 points 1 year ago

Also not a researcher, but I also believe hallucinations are simply an artifact of being able to generate responses that aren't pure reproductions of the training data, i.e. the generalization we want. The problem is that we have something that generalizes without the ability to judge what it comes up with.

In my opinion it will never go away, but I'm sure it can be improved significantly.

[–] kromem 3 points 1 year ago (1 children)

This is a common misconception that I've even seen from people who have a background in ML but just haven't been keeping up to date on the emerging research over the past year.

If you're interested in the topic, this article from a joint MIT/Harvard team of researchers on their work looking at what a toy model of GPT would end up understanding in its neural network might be up your alley.

The TLDR is that it increasingly seems like, once the network reaches a certain complexity, the emergent version that best predicts text isn't simply mapping some sort of frequency table; it's performing more abstract specialization in line with whatever generated the original training material in the first place.

So while yes, it trains on being the best to predict text, that doesn't mean the thing that best does that can only predict text.

You, homo sapiens, were effectively trained across many rounds of "don't die and reproduce." And while you may be very good at doing that, you picked up a lot of other skills along the way as complexity increased which helped accomplish that result, like central air conditioning and Netflix to chill with.

[–] [email protected] 2 points 1 year ago

In my humble opinion, we too are simply prediction machines. The main difference is how efficient our brains are at the huge number of tasks they have to accomplish, given their size and energy requirements. No matter how complex the network is, the outcome is still a mapped one; the number of factors weighed is just extremely large, which gives a more intelligent response. You can see this with each new GPT model, where larger and larger parameter counts give more and more intelligent answers. The fact that we call these "hallucinations" shows how effective the predictive math is, and how it mimics the human ability to just make things up on the fly when we don't have a solid knowledge base to back it up.

I do like this quote from the linked paper:

As we will discuss, we find interesting evidence that simple sequence prediction can lead to the formation of a world model.

That is to say, you don't need complex solutions to map complex problems; you just need to have learned how you got there. It's never a purely random attempt at the problem; it's always a predictive attempt that tries to map the expected outcomes and learns by getting things right and wrong.

At this point, it seems fair to conclude the crow is relying on more than surface statistics. It evidently has formed a model of the game it has been hearing about, one that humans can understand and even use to steer the crow's behavior.

Which is to say that it has a predictive model based on previous games. That doesn't mean it must rigidly follow previous games, but that by playing many games it can see how each move affects the next. It's a simpler example because most board games are simpler than language, with fewer possible outcomes. This isn't to say the crow is now a grandmaster at the game, but it has the reasoning to understand possible next moves, knows which moves are illegal, and knows to take the most advantageous move based on its current model. All of this is predictive in nature: "illegal" moves get assigned very low probability because, in the learned behavior, those moves never happen. This also allows for possible moves a different model wouldn't consider, while overall providing what is statistically the best move according to its model. The crow can be placed into unknown situations and give an intelligent response instead of just going "I don't know this state, I'll do something random." The prediction won't always be correct, but it will more often than not be a valid and statistically reasonable move.
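A toy sketch of how I picture that (the move table, numbers, and position are all invented for illustration): moves that essentially never appeared in the training games end up with near-zero probability, and the model takes the most likely move among the ones that are actually legal right now.

```python
# Made-up "learned" probabilities for the next move in some position.
learned_probs = {
    "e4":  0.42,
    "d4":  0.35,
    "Nf3": 0.20,
    "Ke2": 0.003,   # almost never seen in training -> effectively ruled out
}

def pick_move(probs, legal_moves):
    candidates = {m: p for m, p in probs.items() if m in legal_moves}
    if not candidates:
        return None                              # truly unknown state
    best = max(candidates, key=candidates.get)
    total = sum(candidates.values())
    return best, candidates[best] / total        # best move plus renormalized confidence

print(pick_move(learned_probs, legal_moves={"d4", "Nf3", "Ke2"}))
# -> 'd4': the highest-probability move among those actually legal here
```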

Overall, we aren't totally sure what "intelligence" is; we're just organisms that have developed more and more capability to process information out of a need to survive. But getting down to it, we know neurons take inputs and give outputs based on what amounts to the best response for the given input, and when enough of these are wired together we get "intelligence". In my opinion it's still all predictive; it's how the networks are trained and gain meaning from the data that isn't always obvious. It's only when you blindly accept any answer as correct that you run into the issues we've seen with ChatGPT.

Thank you for sharing the article; it was interesting and helped clarify my understanding of the topic.