AGI (artificial general intelligence) will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits.
Nothing to do with actual capabilities, just the ability to make piles and piles of money.
The same way these capitalists evaluate human beings.
Guess we're never getting AGI then, there's no way they end up with that much profit before this whole AI bubble collapses and their value plummets.
That's an Onion level of capitalism
Lol. We're as far away from AGI as we were before the whole LLM craze. It's just glorified statistical text prediction: no matter how much data you throw at it, it will still just guess the next most likely letter/token based on what came before, and it can't even get its facts straight without bullshitting.
If we ever get it, it won't be through LLMs.
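To make "guess the next most likely token" concrete, here's a toy sketch: a bigram counter over a made-up corpus with greedy decoding. The corpus and everything else here is invented for illustration, and this is obviously not how an LLM is built internally; only the one-token-at-a-time generation loop is meant to mirror the shape of statistical text prediction.

# Toy sketch of "predict the most likely next token given what came before".
# Not an LLM: just a bigram counter over a tiny made-up corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat sat on the rug".split()

# Count which token follows which (the "statistics" in statistical text prediction).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(token, steps=6):
    out = [token]
    for _ in range(steps):
        if token not in follows:
            break
        # Greedily pick the single most likely continuation; a real model samples
        # from a probability distribution over tens of thousands of tokens instead.
        token = follows[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(generate("the"))  # prints the greedy continuation of "the"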
I hope someone will finally mathematically prove that it's impossible with current algorithms, so we can be done with this bullshitting.
They did! Here's a paper that proves basically that:
van Rooij, I., Guest, O., Adolfi, F. et al. Reclaiming AI as a Theoretical Tool for Cognitive Science. Comput Brain Behav 7, 616–636 (2024). https://doi.org/10.1007/s42113-024-00217-5
Basically, it formalizes and proves that learning any black-box algorithm of that kind (one trained on a finite universe of human responses to prompts, able to take any finite input and produce an output that seems plausibly human-like) is an NP-hard problem. And NP-hard problems at that scale are intractable: they can't be solved with the resources available in the universe, even with perfect/idealized algorithms that haven't been invented yet.
This isn't a proof that AI is impossible, just that developing one will take more than inferential learning from training data.
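If it helps, here's the rough shape of the claim as I read it. This is my own loose paraphrase in pseudo-formal terms, not the paper's actual definitions or theorem statement.

% Loose paraphrase of the learning problem the paper analyses; my wording, not the authors'.
\textbf{Problem (learn a human-like responder, paraphrased).}
Given a finite sample $D = \{(p_1, r_1), \dots, (p_n, r_n)\}$ of prompt--response pairs
drawn from human behaviour, produce an algorithm $A$ such that, for new prompts $p$,
the output $A(p)$ is plausibly human-like with high probability.

\textbf{Claim (per the paper, as summarised above).}
Solving this problem in general is NP-hard, so under standard complexity assumptions
no learning procedure can do it tractably at realistic scales, no matter how much data
or compute you throw at it.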
Thank you, it was an interesting read.
Unfortunately, as I looked into it more, I stumbled upon a paper that points out some key problems with the proof. I haven't dug into it further, and tbh my expertise in formal math ends at vague memories from a CS degree almost 10 years ago, but the points do seem to make sense.
There are already a few papers about diminishing returns in LLMs.
I'm gonna laugh when Skynet comes online, runs the numbers, and finds that the country's starvation issues can be solved by feeding the rich to the poor.
It would be quite the trope inversion if people sided with the AI overlord.
"It's at a human-level equivalent of intelligence when it makes enough profits" is certainly an interesting definition and, in the case of the C-suiters, possibly not entirely wrong.
We taught sand to do math
And now we're teaching it to dream
All the stupid fucks can think to do with it
Is sell more cars
Cars, and snake oil, and propaganda
We've had a definition for AGI for decades. It's a system that can do any cognitive task as well as a human can, or better. Humans are "generally intelligent"; replicate the same thing artificially and you've got AGI.
This is just so they can announce at some point in the future that they've achieved AGI to the tune of billions in the stock market.
Except that it isn't AGI.
But OpenAI has received more than $13 billion in funding from Microsoft over the years, and that money has come with a strange contractual agreement that OpenAI would stop allowing Microsoft to use any new technology it develops after AGI is achieved.
The real motivation is to not be beholden to Microsoft.
That's not a bad way of defining it, as far as totally objective definitions go. $100 billion is more than the current annual net income of all of Microsoft. It's reasonable to expect that an AI which can do that is better than a human being (in fact, better than 228,000 human beings) at everything that matters to Microsoft.
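Back-of-the-envelope check of those numbers; the net income and headcount figures below are my own rough assumptions (roughly the figures floating around for Microsoft's latest fiscal year), not anything from the article.

# Rough sanity check of the comment's arithmetic.
# Both Microsoft figures are assumptions for illustration, not from the article.
agi_profit_bar = 100e9      # the $100B profit threshold in the reported OpenAI/Microsoft agreement
msft_net_income = 88e9      # assumed: Microsoft annual net income, order of magnitude
msft_headcount = 228_000    # assumed: Microsoft employee count, as used in the comment

print(agi_profit_bar > msft_net_income)                         # True: the bar exceeds Microsoft's annual profit
print(f"${agi_profit_bar / msft_headcount:,.0f} per employee")  # roughly $438,596 of profit per employee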
So they don't actually have a definition of AGI; they just have a point at which they're going to announce it, regardless of whether it actually is AGI or not.
Great.