The way that one learns which of one's beliefs are "hallucinations" is to test them against reality — which is one thing that an LLM simply cannot do.
Sure they can, and they will: over time they will collect data to separate fact from fiction, in the same way that we solve captchas by choosing all the images with bicycles in them. It will never be 100%, but it will approach it over time. Hallucination will always be something to consider in a response, but it will certainly decrease over time, to the point that hallucinations become rare for well-discussed topics. At least, that is how I see it developing.
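For what it's worth, here is one rough way to picture that captcha-style feedback loop: a minimal sketch of a hypothetical majority-vote filter over human labels. The statements, votes, and 0.8 threshold are all made up for illustration; this is not how any real training pipeline works.

```csharp
// Hypothetical sketch: filter candidate training statements by human votes,
// loosely analogous to captcha-style labeling. All data here is invented.
using System;
using System.Collections.Generic;
using System.Linq;

class FeedbackFilter
{
    // Keep a statement only if enough human raters marked it as true.
    // The 0.8 threshold is an arbitrary choice for this example.
    static bool PassesVote(List<bool> humanVotes, double threshold = 0.8)
    {
        return humanVotes.Count(v => v) >= humanVotes.Count * threshold;
    }

    static void Main()
    {
        var candidates = new Dictionary<string, List<bool>>
        {
            ["the sky is blue"]  = new List<bool> { true, true, true, true, false },
            ["the sky is green"] = new List<bool> { false, false, true, false, false },
        };

        foreach (var (statement, votes) in candidates)
            Console.WriteLine($"{statement}: {(PassesVote(votes) ? "keep" : "drop")}");
    }
}
```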
Why do you assume they will improve over time? You need good data for that.
Imagine a world where AI chatbots create a lot of the internet. Now that "data" is scraped and used to train other AIs. Hallucinations could easily persist in this way.
Or humans could just all post "the sky is green" everywhere. When that gets scraped, the resulting AI will know the word "green" follows "the sky is". Instant hallucination.
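To make that concrete: a deliberately toy next-word counter shows how poisoned scraped text becomes a model's "knowledge." Real LLMs are transformers, not bigram counters, and the corpus string here is made up, but the failure mode is the same: the model learns whatever the training text says, true or not.

```csharp
// Toy illustration only: a frequency counter that "learns" what word
// follows "sky is" from a poisoned corpus. Not how a production LLM works.
using System;
using System.Collections.Generic;
using System.Linq;

class BigramDemo
{
    static void Main()
    {
        // Hypothetical "scraped" corpus where the false claim outnumbers the true one.
        string corpus = "the sky is green . the sky is green . the sky is blue .";
        string[] tokens = corpus.Split(' ');

        // Count how often each word follows the phrase "sky is".
        var counts = new Dictionary<string, int>();
        for (int i = 2; i < tokens.Length; i++)
        {
            if (tokens[i - 2] == "sky" && tokens[i - 1] == "is")
                counts[tokens[i]] = counts.GetValueOrDefault(tokens[i]) + 1;
        }

        // The "model" predicts the most frequent continuation: "green".
        string prediction = counts.OrderByDescending(kv => kv.Value).First().Key;
        Console.WriteLine($"the sky is {prediction}");
    }
}
```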
These bots are not thinking about what they type. They are copying the thoughts of others. That's why they can't check anything. They are not programmed to be correct, just to spit out words.
I can only speak from my experience, but over the past 4 months of daily use of ChatGPT-4 (Plus), it has gone from many hallucinations per hour to now only about one a week. I am using it to write C# code, and I am utterly blown away not only by how good it has gotten at writing error-free code, but even more so by how good it has gotten at understanding a complex environment it cannot even see, beyond what I explain via prompts. Over the past couple of weeks in particular, it really feels like it has gotten more powerful, and for the first time it “feels” like I am working with an expert person. If you had asked me in May where it would be today, I would not have guessed it would be this good. I thought responses this intelligent were at least another 3-5 years away.
You could replace AI and chat bots with “MAGA/Trump voter” and it would look like you’re summarizing the party’s voter base lol.
Yeah, because it would be impossible to have an LLM running a robot with visual, tactile, etc. recognition, right?