this post was submitted on 27 Dec 2024
Technology (lemmy.world)
Lol. We're as far from AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before it, and it can't even get its facts straight without bullshitting.
If we ever get it, it won't be through LLMs.
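To make "statistical text prediction" concrete, here's a deliberately tiny toy sketch: a bigram model that only ever picks the statistically most frequent next word. This is an illustration of the general idea, not how real LLMs actually work (they use learned neural representations, not raw counts).

```python
from collections import Counter, defaultdict

# "Train" a toy bigram model: count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Greedily pick the most likely next word seen in training.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" ("cat" followed "the" twice, vs once each for "mat"/"fish")
```

The model has no notion of truth, only of frequency: whatever followed most often in training wins, which is the intuition behind the "guessing the next token" criticism.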
I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can finally be done with this bullshitting.
They did! Here's a paper that proves basically that:
van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5
Basically, it formalizes a proof that for any black-box algorithm trained on a finite universe of human responses to prompts, producing outputs to arbitrary finite inputs that seem plausibly human-like is an NP-hard problem. And NP-hard problems at that scale are intractable: they can't be solved with the resources available in the universe, even with perfect/idealized algorithms that haven't yet been invented.
This isn't a proof that AI is impossible, just that the method to develop an AI will need more than just inferential learning from training data.
Thank you, it was an interesting read.
Unfortunately, as I looked into it more, I stumbled upon a paper that points out some key problems with the proof. I haven't dug into it further, and tbh my expertise in formal math ends at vague memories from a CS degree almost 10 years ago, but the points do seem to make sense.
https://arxiv.org/html/2411.06498v1
Doesn't that just say that AI will never be cheap? You can still brute force it, which is more or less how backpropagation works.
I don't think "intelligence" needs to have a perfect "solution", it just needs to do things well enough to be useful. Which is how human intelligence developed, evolutionarily - it's absolutely not optimal.
Intractable problems at that scale can't be brute forced, because the brute-force solution can't be run within the time scale of the universe using the resources of the universe. If we're talking about devoting all of humanity's computing power to a problem and hoping to solve it before the sun expands to engulf the earth in about 7.5 billion years, that's not a real solution.
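A back-of-the-envelope sketch of why exponential brute force is hopeless at this scale. The constants below are rough, commonly cited assumptions (not figures from the paper): ~10^80 atoms in the observable universe, each somehow doing 10^12 operations per second for ~10^17 seconds.

```python
# Generous upper bound on all computation the universe could ever do:
# ~1e80 atoms * 1e12 ops/sec each * ~1e17 seconds  ->  ~1e109 operations.
ops_available = 1e80 * 1e12 * 1e17

# A brute-force search over an exponential space needs ~2^n steps.
# Even a modest-sounding n blows straight past the universal budget:
n = 500
ops_needed = 2 ** n  # roughly 3e150

print(ops_needed > ops_available)  # True: intractable even in principle
```

The point isn't the exact constants; you can be off by dozens of orders of magnitude in the optimist's favor and 2^n still wins long before n reaches anything interesting.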
Yeah, maybe you're right. I don't know where the threshold is.
I wonder if computational feasibility will soon cap the improvement of current-generation LLMs?