Greg

joined 2 years ago
[–] [email protected] 5 points 40 minutes ago

I'm not sure about -15C, but don't operate a freezer at -15K, as it violates the fundamental laws of thermodynamics.
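A quick sanity check of the units (a minimal sketch; the 273.15 offset is the standard Kelvin/Celsius conversion, and the function name is just illustrative):

```python
def kelvin_to_celsius(t_k: float) -> float:
    """Convert a Kelvin temperature to Celsius, rejecting impossible inputs."""
    if t_k < 0:
        # 0 K is absolute zero; nothing can be colder.
        raise ValueError("below absolute zero: physically impossible")
    return t_k - 273.15

# -15 C is a normal freezer setting, i.e. about 258.15 K:
print(kelvin_to_celsius(258.15))  # -15.0 (up to float rounding)

# -15 K, on the other hand, is below absolute zero, so the check raises.
```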

[–] [email protected] 5 points 1 day ago

100%, the anti AI hype is as misinformed as the AI hype. We have so much work ahead of us to effectively utilize the current LLMs.

[–] [email protected] 8 points 1 day ago

I had a really good friend on MySpace that I lost touch with. I think he was a little paranoid, we didn't speak much and he was always looking over his shoulder. His name was Tom.

[–] [email protected] 4 points 1 day ago (1 children)

That is a very VC-baiting title. But it doesn't appear from the abstract that they're claiming that LLMs will develop to the complexity of AGI.

[–] [email protected] 3 points 1 day ago

Do you have a non-paywalled link? And is that quote in relation to LLMs specifically or AI generally?

[–] [email protected] 64 points 2 days ago (7 children)

largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence

Who said that LLMs were going to become AGI? LLMs as part of an AGI system make sense, but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims, which helped feed the hype.

I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.

[–] [email protected] 42 points 4 days ago

This is 100% OP's weird fetish, but props to this hustle

[–] [email protected] 9 points 5 days ago

This is true on algorithmic platforms that reward "engagement" of any kind. I didn't get that sense from pre-2010 Facebook or Twitter, and I don't get that sense on fediverse platforms.

[–] [email protected] 5 points 5 days ago

I'm not defending Sam Altman or the AI hype. A framework that uses an LLM isn't an LLM and doesn't have the same limitations. So the accurate media coverage that LLMs may have reached a plateau doesn't mean we won't see continued performance gains in frameworks that use LLMs. OpenAI's o1 is an example: o1 isn't an LLM, it's a framework that augments some of the deficiencies of LLMs with other techniques. That's why it doesn't give you an immediate streamed response when you use it; it's not just an LLM.

[–] [email protected] 37 points 5 days ago (2 children)

And they have a German accent. I can tell from the way they type.

[–] [email protected] 2 points 5 days ago

That's not Sam Altman saying that LLMs will achieve AGI. LLMs are large language models; OpenAI is continuing to develop LLMs (like GPT-4o), but it's also working on frameworks that use LLMs (like o1). Those frameworks may achieve AGI, but not the LLMs themselves. And this is a very important distinction, because LLMs are reaching performance parity, so we are likely reaching a plateau for LLMs given the existing training data and techniques. There are still optimizations for LLMs, like increasing context window sizes, etc.

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago) (6 children)

When has Sam Altman said LLMs will reach AGI? Can you provide a primary source?

 

Imagine trying to offload 40K in one-dollar bluey coins 🤣

 
18
VFX1 Headgear (en.m.wikipedia.org)

339
Ohio pets (lemmy.ca)

167
New diet (lemmy.ca)