this post was submitted on 02 Aug 2023
361 points (94.1% liked)


Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[–] DragonAce -3 points 1 year ago (1 children)

I don't think that's the case. If I understand correctly, the current issue is processing power: a model can only load so much data before response time goes to absolute shit. I would think that layering different AI logic checks to verify statements, recall previous conversations, and handle the other mental processes humans do automatically would correct this issue, but with current technology it's not even an option. My theory is that once quantum computers are finally realized and economically feasible, developers will be able to overcome the response-time hurdle, and all of the layered logic checks will be able to run simultaneously and instantly. My personal opinion is that the eventual layering of numerous AI models to overlap, check, and recheck one another will be what brings about the emergence of what could be considered actual AI consciousness.

[–] _jonatan_ 5 points 1 year ago (1 children)

It is not an issue of processing power, it’s a problem with the basic operating principles of LLMs. They predict what they “think” is a valid bit of text to come after the last bit of text.
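That "predict the next bit of text" loop can be sketched with a toy stand-in for the real network (the hand-made bigram score table below is purely illustrative, not how any actual LLM stores its knowledge):

```python
# Toy sketch of an LLM's autoregressive loop: at each step, score every
# candidate next token given the text so far, append the best one, repeat.
# A real model replaces this lookup table with a huge neural network.
BIGRAM_SCORES = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def generate(prompt: str, max_new_tokens: int = 3) -> str:
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        scores = BIGRAM_SCORES.get(tokens[-1])
        if scores is None:  # no known continuation: stop generating
            break
        # Greedy decoding: always take the single highest-scoring token.
        tokens.append(max(scores, key=scores.get))
    return " ".join(tokens)

print(generate("the"))  # the cat sat down
```

The point of the sketch is that nothing in the loop checks whether the output is *true*; the model only ever asks "what text plausibly comes next?", which is why confident-sounding nonsense falls out so naturally.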

Sure, the output could be verified by some other machine-learning tool, but we have no idea how that would work.

But I strongly doubt LLMs are a stepping stone on the way to true AIs. If you want to get to the moon you can’t just build higher and higher towers.

Also quantum computers aren’t really suited to run artificial neural networks as far as I know.

[–] DragonAce 1 points 1 year ago (1 children)

Very good points. I have very limited knowledge about the inner workings of most LLMs, I just know the tidbits I've read here and there.

As for quantum computers, my current understanding is that once they're at a point where they can be used commercially, they should easily be able to model and run artificial neural networks. Based on the stuff I've seen from Dr. Michio Kaku, quantum computers will eventually have the capacity to do pretty much anything.

[–] _jonatan_ 3 points 1 year ago* (last edited 1 year ago)

I hadn't looked up what Michio Kaku has said about quantum computing before, but it does not look well regarded:

“His 2023 book on Quantum Supremacy has been criticized by quantum computer scientist Scott Aaronson on his blog. Aaronson states ‘Kaku appears to have had zero prior engagement with quantum computing, and also to have consulted zero relevant experts who could’ve fixed his misconceptions.’”

I’m hardly an expert on the subject, but as I understand it they have some very niche uses, mostly in cryptography and some forms of simulation.