this post was submitted on 29 Apr 2024
195 points (94.9% liked)
Technology
Just ask ChatGPT what it thinks about some non-existent product and it will start hallucinating.
This is a known issue with LLMs, and with deep learning in general, since their reasoning is a black box to scientists.
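As a concrete illustration of the kind of prompt being described here, a minimal sketch using the official `openai` Python client (the model name and the made-up product are purely illustrative assumptions, and an API key is assumed to be set in the environment):

```python
# Minimal sketch: asking a chat model about a product that does not exist.
# Assumes the official `openai` Python client; the model name and the
# fictional product name below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {
            "role": "user",
            "content": "What do you think of the Fictionix Hyperwidget 9000? "
                       "How does it compare to its competitors?",
        }
    ],
)

# With no real product to ground the answer, the reply is typically a
# confident-sounding fabrication, i.e. a hallucination.
print(response.choices[0].message.content)
```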
It's not that their reasoning is a black box. It's that they do not have reasoning! They just guess what the next word in the sentence is likely to be.
I mean, it's a bit more complicated than that, but at its core, yes, this is correct. Highly recommend this video.
https://www.youtube.com/watch?v=wjZofJX0v4M
It's not even a little bit more complicated than that. They are literally trained to predict the next token given a series of previous tokens. The way they do that is very complicated, and the amount of data they are trained on is huge. That's why they sometimes have to give correct information: it's what sounds plausible. Providing accurate information is literally a side effect of what they are actually trained to do.
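A minimal sketch of what "predict the next token" means in practice, using a small open checkpoint via Hugging Face `transformers` (the model choice and prompt are just illustrative assumptions):

```python
# Minimal sketch: inspect a causal LM's next-token distribution.
# Assumes the `transformers` and `torch` libraries and the small GPT-2
# checkpoint purely for illustration; any causal LM behaves the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The model's only output is a score for every token in the vocabulary at
# each position; "correct" answers are just the continuations that score
# highest because they were common in the training data.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10}  p={prob.item():.3f}")
```

Generation is nothing more than repeating this step: pick a token from that distribution, append it to the prompt, and predict again.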
Here is an alternative Piped link(s):
https://www.piped.video/watch?v=wjZofJX0v4M
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I'm open-source; check me out at GitHub.