this post was submitted on 02 Aug 2023
361 points (94.1% liked)


Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[–] fubo 7 points 1 year ago (2 children)

The way that one learns which of one's beliefs are "hallucinations" is to test them against reality — which is one thing that an LLM simply cannot do.

[–] [email protected] 3 points 1 year ago (1 children)

Sure they can, and they will: over time they will collect data to separate fact from fiction, in the same way that we solve captchas by choosing all the images with bicycles in them. It will never be 100%, but it will approach it over time. Hallucinations will always be something to consider in a response, but they will certainly decrease over time, to the point that they become rare for well-discussed topics. At least, that is how I see it developing.

[–] [email protected] 7 points 1 year ago (2 children)

Why do you assume they will improve over time? You need good data for that.

Imagine a world where AI chatbots create a lot of the internet. Now that "data" is scraped and used to train other AIs. Hallucinations could easily persist in this way.

Or humans could just all post "the sky is green" everywhere. When that gets scraped, the resulting AI will know the word "green" follows "the sky is". Instant hallucination.

These bots are not thinking about what they type. They are copying the thoughts of others. That's why they can't check anything. They are not programmed to be correct, just to spit out words.
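The "the sky is green" point above can be sketched with a toy next-token model. This is a deliberately minimal illustration, not how any real LLM is built: it just counts which word follows a given context in a hypothetical scraped corpus, so poisoned training text directly determines the "prediction".

```python
from collections import Counter, defaultdict

# Hypothetical scraped corpus, poisoned so "green" outnumbers "blue"
# after the context "the sky is".
corpus = "the sky is green . the sky is green . the sky is blue .".split()

# Count which word follows each 3-word context (a simple 4-gram model).
counts = defaultdict(Counter)
for i in range(len(corpus) - 3):
    context = tuple(corpus[i:i + 3])
    counts[context][corpus[i + 3]] += 1

def predict(context):
    """Return the most frequent next word for a context."""
    return counts[tuple(context)].most_common(1)[0][0]

print(predict(["the", "sky", "is"]))  # "green" wins 2:1 over "blue"
```

The model has no notion of whether the sky is actually green; it only reproduces the statistics of its training text, which is the commenter's point about copied "thoughts" with no reality check.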

[–] [email protected] 5 points 1 year ago

I can only speak from my own experience: over the past 4 months of daily use of ChatGPT 4 Plus, it has gone from many hallucinations per hour to only about one a week. I am using it to write C# code, and I am utterly blown away by how good it has gotten, not only at writing error-free code, but even more so at understanding a complex environment that it cannot even see, beyond what I can explain via prompts. Over the past couple of weeks in particular, it really feels like it has gotten more powerful and, for the first time, "feels" like I am working with an expert person. If you had asked me in May where it would be today, I would not have guessed it would be this good. I thought responses this intelligent were at least another 3-5 years away.

[–] TheGoldenGod 1 points 1 year ago

You could replace AI and chat bots with “MAGA/Trump voter” and it would look like you’re summarizing the party’s voter base lol.

[–] [email protected] 0 points 1 year ago

Yeah, because it would be impossible to have an LLM running a robot with visual, tactile, etc. recognition, right?