100%, the anti-AI hype is as misinformed as the AI hype. We have so much work ahead of us to effectively utilize the current LLMs.
I had a really good friend on MySpace that I lost touch with. I think he was a little paranoid; we didn't speak much, and he was always looking over his shoulder. His name was Tom.
That is a very VC-baiting title. But it doesn't appear from the abstract that they're claiming that LLMs will develop to the complexity of AGI.
Do you have a non-paywalled link? And is that quote in relation to LLMs specifically or AI generally?
largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence
Who said that LLMs were going to become AGI? LLMs as part of an AGI system make sense, but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims. Which helped feed the hype.
I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.
This is 100% OP's weird fetish, but props to this hustle
This is true on algorithmic platforms that reward "engagement" of any kind. I didn't get that sense from pre-2010 Facebook or Twitter, and I don't get that sense on fediverse platforms.
I'm not defending Sam Altman or the AI hype. A framework that uses an LLM isn't an LLM and doesn't have the same limitations. So the accurate media coverage that LLMs may have reached a plateau doesn't mean we won't see continued performance gains in frameworks that use LLMs. OpenAI's o1 is an example. o1 isn't an LLM; it's a framework that augments some of the deficiencies of LLMs with other techniques. That's why it doesn't give you an immediate streamed response when you use it: it's not just an LLM.
And they have a German accent. I can tell from the way they type.
That's not Sam Altman saying that LLMs will achieve AGI. LLMs are large language models; OpenAI is continuing to develop LLMs (like GPT-4o), but they're also working on frameworks that use LLMs (like o1). Those frameworks may achieve AGI, but not the LLMs themselves. And this is a very important distinction, because LLMs are reaching performance parity, so we are likely reaching a plateau for LLMs given the existing training data and techniques. There are still optimizations for LLMs, like increasing context window sizes, etc.
When has Sam Altman said LLMs will reach AGI? Can you provide a primary source?
I'm not sure about -15C, but don't operate a freezer at -15K, as it violates the fundamental laws of thermodynamics.