Bill Gates feels generative AI has plateaued, says GPT-5 will not be any better: The billionaire philanthropist, in an interview with the German newspaper Handelsblatt, shared his thoughts on artificial general intelligence, climate change, and the scope of AI in the future.
Cool, Bill Gates has opinions. I think he's being hasty and speaking out of turn, and he's only partially correct. From my understanding, the "big innovation" of GPT-4 was adding more parameters and scaling up compute. The core algorithms are generally agreed to be mostly the same as in earlier versions (not that we know for sure, since OpenAI has only released a technical report). Based on that, the real limit on this technology is compute and the number of parameters (as boring as that is), so he's right that the algorithm design may have plateaued. However, we really don't know what will happen when truly monster rigs with tens of trillions of parameters are trained on the entirety of human written knowledge (the morality of that notwithstanding), and that's where he's wrong.
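For a sense of what "adding more parameters" means at that scale, here's a rough back-of-the-envelope sketch. The 12·d² per-block figure is a common rule of thumb, and the configuration is GPT-3-sized; none of this is anything OpenAI has confirmed about GPT-4:

```python
# Rough transformer parameter-count estimate. The 12 * d_model^2 per-block
# figure is a standard rule of thumb (attention projections + MLP weights);
# the configuration below is GPT-3-sized, chosen purely for illustration.
def approx_params(n_layers: int, d_model: int, vocab_size: int = 50_000) -> int:
    per_block = 12 * d_model ** 2       # weights in one transformer block
    embeddings = vocab_size * d_model   # token embedding matrix
    return n_layers * per_block + embeddings

# 96 layers at d_model = 12288 lands near the ~175B often quoted for GPT-3:
print(f"{approx_params(96, 12_288):,}")  # ~174.6 billion
```

Scaling that to "tens of trillions" means roughly a 100x jump in weights, which is a compute and memory problem, not an algorithm-design problem.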
You got it the wrong way around. We already have a ton of compute and what this kind of AI can do is pretty cool.
But adding more compute power and parameters won't solve the inherent problems.
No matter what you do, it's still just a text generator guessing the next best word. It doesn't do real math or logic; it gets basic things wrong and hallucinates fake facts.
Sure, it will get slightly better still, but not much. You can throw a million times the power at it and it will still fuck up in just the same ways.
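For what it's worth, the "guessing the next best word" loop really is this simple at the top level. Here's a toy greedy decoder over a made-up bigram table (the table, tokens, and `<eos>` marker are invented for illustration):

```python
# Toy "guess the next word" loop: a language model is just a function from
# context to a probability distribution over next tokens. Table is invented.
probs = {
    "the": {"cat": 0.5, "dog": 0.3, "math": 0.2},
    "cat": {"sat": 0.9, "ran": 0.1},
}

def next_token(context: list[str]) -> str:
    # Greedy decoding: always take the highest-probability next token.
    dist = probs.get(context[-1], {"<eos>": 1.0})
    return max(dist, key=dist.get)

text = ["the"]
while text[-1] != "<eos>":
    text.append(next_token(text))
print(text)  # ['the', 'cat', 'sat', '<eos>']
```

Real models replace the lookup table with a neural network and greedy decoding with sampling, but the outer loop is the same; whether that loop can ever amount to "real math or logic" is exactly what's being argued here.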
This is short-sighted. The jump to GPT-3.5 was preceded by the same general misunderstanding ("we've reached the limit of what generative pre-trained transformers can do," "we've reached diminishing returns," etc.), and then a relatively small change (AFAIK it was a couple of additional transformer layers and a refinement of the training protocol) and suddenly it was displaying behaviors none of the experts expected.
Small changes will compound when factored over billions of nodes, that's just how it goes. It's just that nobody knows which changes will have that scale of impact, and what emergent qualities happen as a result.
It's ok to say "we don't know why this works" and also "there's no reason to expect anything more from this methodology." But I wouldn't dismiss the possibility of further improvements out of hand.
This is exactly it. And it’s funny you’re getting downvoted.
We don’t truly know the depth of ML yet, or how these general models could potentially change when a few vectors in the equation change, and that’s the big unknown with it. I agree with you here that Gates’ opinion is just that, and isn’t particularly well informed. Especially in comparison to what some of the industry and ML experts are saying about how far we can go with the models, how they will evolve as we change parameters/vectors/dependencies, and the impact of that evolution on potential applications. It’s just too early.
I kinda get why I'm getting downvoted, honestly. The ChatGPT fanboys definitely give off an "NFT-grindset" kind of vibe, and they can be loud and overzealous with their prognosticating. It feels cathartic to make fun of the thing they've adopted as a centerpiece of their personality.
None of that changes the objectively real and very unexpected improvements these models are displaying, and we're still not sure what they're doing behind the curtain. "Predicting the next most likely word" is simply not a sufficient explanation for how these models seem to correctly interpret intent and apply factual knowledge stored in their training data in abstract ways.
People want to squabble over anthropomorphic word choices and debate 'consciousness', and fair enough, it's an interesting question. But that doesn't really come close to what's actually interesting: the models gaining functionality when by all accounts they should only be 'guessing the next most likely word'.
I'm not really interested in debating people who are performatively unimpressed by these products, but it bothers me that those people keep rolling their eyes when significant advancements are made. Sure, it's not new that ML algorithms can decode keystrokes from an audio recording, but it's a big deal when those models can run on consumer-grade hardware and not just a supercomputer run by a three-letter agency.