this post was submitted on 28 Feb 2025
95 points (99.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 31 comments
[–] [email protected] 34 points 2 days ago (1 children)

The bubble is bursting; I hope Nvidia implodes in never-before-seen spectacular fashion.

[–] [email protected] 8 points 2 days ago (1 children)

You are being overzealous; the bubble ain't gonna pop for a long while imo

[–] [email protected] 13 points 2 days ago* (last edited 2 days ago) (2 children)

Nah, the funding is starting to dry up already. Big players including Microsoft are cancelling plans. Of course the scams keep going for a while, but the peak is in the past.

[–] Blue_Morpho 2 points 2 days ago

The investment bubble will pop the way it did for the Internet back in 2000. It doesn't mean the Internet went away or even that it stopped growing.

[–] [email protected] -2 points 2 days ago* (last edited 1 day ago) (6 children)

I don't think the bubble will pop as long as "AI" keeps advancing at the pace it is. LLMs, text-to-photo, and text-to-video models are getting ~~exponentially~~ better year over year. I really don't think the hype train is going to slow down until the rate of progress slows as well, and so far there are no indications that the rate of progress is going to slow.

Wild guess here that I'm sure you and others will disagree with: even when the bubble "pops," it won't really be a pop but more of a downturn that doesn't actually hurt any of the big players significantly.

Only time will tell; it will certainly be interesting to watch as an outsider no matter what.

Edit: As many have pointed out, using the word exponential was incorrect. I still stand by my view that the bubble isn't going to pop anytime soon and that these models are getting significantly better year over year. I would argue that text-to-video models have actually had exponential improvements, at least in the past year or two, but for the other categories, you're right, not so much.

[–] [email protected] 14 points 2 days ago* (last edited 2 days ago)

LLMs, text-to-photo, and text-to-video models are getting exponentially better year over year.

it is 2025 how are you still saying this shit

[picture of a cat looking very tired]

[–] [email protected] 20 points 2 days ago

LLMs, text-to-photo, and text-to-video models are getting logarithmically better year over year.

[–] [email protected] 15 points 2 days ago (1 children)

This article is literally about the fact that progress has stagnated...

There are clearly fundamental issues with the approach they have been using for LLMs. There is no new data left; the entire internet has been scraped already. All that's left is incremental improvements in the way they process the data.

[–] [email protected] 11 points 2 days ago

In my experience they've significantly tailed off over the past year; exponential growth would mean the amount they improve per unit time increases over time. What has gotten better is our ability to run the same level of things on cheaper hardware with less power, again just in my limited experience. (Also, this is not the definition of exponential growth, just a property of it; polynomial growth has the same property.)
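
(An aside on that parenthetical: a toy Python sketch with made-up numbers, showing that "the gains keep growing each year" is also true of polynomial curves, so it doesn't by itself establish exponential growth.)

```python
# Toy sketch: both exponential and polynomial curves show increasing
# year-over-year gains, but only the exponential one grows by a
# constant *factor* each year. Numbers are illustrative, not measured.

def yearly_gains(f, years=6):
    values = [f(t) for t in range(years)]
    return [b - a for a, b in zip(values, values[1:])]

exp_gains = yearly_gains(lambda t: 2 ** t)   # doubles every year
poly_gains = yearly_gains(lambda t: t ** 2)  # quadratic growth

print("exponential gains:", exp_gains)   # [1, 2, 4, 8, 16] -- the gains themselves double
print("polynomial gains: ", poly_gains)  # [1, 3, 5, 7, 9]  -- gains grow, but only linearly
```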

[–] [email protected] 6 points 2 days ago (1 children)

The Information's Dealmaker newsletter is talking today about downrounds in AI venture funding

[–] [email protected] 8 points 2 days ago

downrounds in AI venture funding

[sickos meme image]

[–] Alphane_Moon 1 points 2 days ago* (last edited 1 day ago)

Has cloud LLM quality really improved exponentially in the past 12-18 months?

I use a mix of local and cloud LLMs and to be honest my use cases are relatively simple, so it's difficult for me to say.

There is also the issue of running out of real training data.

Not necessarily disagreeing with you (just look how long crypto has lasted and they've never been able to go beyond degenerate financial speculation and criminal activities).

[–] [email protected] 15 points 2 days ago

Ed Zitron:

Sam Altman is talking about bringing online "tens of thousands" and then "hundreds of thousands" of GPUs. 10,000 GPUs cost them $113 million a year, 100,000 $1.13bn, so this is Sam Altman committing to billions of dollars of compute for an expensive model that lacks any real new use cases. Suicide.

Also, $1.30 per hour per GPU is the Microsoft discount rate for OpenAI. Safe to assume there are other costs, but raw compute for GPT-4.5 is massive, and committing such resources at this time is truly fatalistic and suggests Altman has no other cards to play.
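
(A quick back-of-the-envelope check of Zitron's arithmetic: a Python sketch assuming round-the-clock utilization at the quoted $1.30/GPU-hour rate; all figures are derived from the quote above, nothing else.)

```python
# Sanity check of the quoted GPU costs, assuming 24/7 utilization
# at the $1.30/GPU-hour Microsoft discount rate cited above.

HOURLY_RATE = 1.30           # $ per GPU-hour (from the quote)
HOURS_PER_YEAR = 24 * 365    # 8,760 hours

cost_per_gpu_year = HOURLY_RATE * HOURS_PER_YEAR  # ~$11,388 per GPU per year

for gpus in (10_000, 100_000):
    yearly = gpus * cost_per_gpu_year
    print(f"{gpus:>7,} GPUs: ${yearly / 1e6:,.0f}M/year")

# Output:
#  10,000 GPUs: $114M/year    (the quote rounds this to $113 million)
# 100,000 GPUs: $1,139M/year  (~$1.13bn, matching the quote)
```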

[–] [email protected] 13 points 2 days ago

In the past I have worked a little in the product cycle at a Silicon Valley company, and I just can't imagine anyone I met ever getting to the step of "tell the customers the new product is marginally better at 15-30x the current price" and then actually doing it, instead of cancelling releases, holding fire-drill meetings all week to change paths as soon as possible, etc. Unless they already did that?

"bonkers" is right.

[–] [email protected] 23 points 2 days ago (1 children)

Investors poured completely insane amounts of money into the endless money pit that is ClosedAI. Then they realized betting everything on one horse was really stupid, since ClosedAI has zero competitive advantage.

now they're trying to get as much loot off the sinking ship as possible lmao

they'll probably exit scam soon

[–] [email protected] 17 points 2 days ago (1 children)

it's been an ongoing exit scam

OpenAI buys services from a pile of Altman's other portfolio companies

[–] [email protected] 13 points 2 days ago (1 children)

OpenAI buys services from a pile of Altman’s other portfolio companies

which reminds me of one of my actual favourite parts of the bayfuckers playacting at building companies: how absolutely self-cycling a lot of the funding ends up being. shartups burning fucking piles of money on other, also-VC-funded, shartups. totally normal and healthy way for money to flow.

[–] [email protected] 14 points 2 days ago

it's not startup gambling if it's embezzlement

[–] homesweethomeMrL 15 points 2 days ago

What's amazing is how long the hype has stayed at an insane level while there isn't a single product that's very popular.

There are niche products, and there are products that a lot of people use sometimes or don't mind, but there's nothing in existence that a lot of people care about, much less one a lot of people pay for.

The loudest voice about the whole thing is how terrible it is. It's just bonkers.

[–] [email protected] 3 points 1 day ago

I love OpenAI model names. 3.5 wasn't enough, so you need Turbo, but Turbo is nothing compared to 4.0, but you can get the mini version for free, and now it's just 4.5. I still use the 3.5 Turbo API.

[–] [email protected] 9 points 2 days ago (2 children)
[–] [email protected] 18 points 2 days ago

Former OpenAI researcher Andrej Karpathy wrote on X that GPT-4.5 is better than GPT-4o but in ways that are subtle and difficult to express. "Everything is a little bit better and it's awesome," he wrote, "but also not exactly in ways that are trivial to point to."

plebeian, you don't understand, you're sniffing our farts wrong

[–] [email protected] 10 points 1 day ago

And GPT-4.5 is terrible for coding, relatively speaking, with an October 2023 knowledge cutoff that may leave out knowledge about updates to development frameworks.

This is in no way specific to GPT-4.5 but remains a weirdly under-mentioned albatross around the neck of the entire LLM code-guessing field, probably because the less you know about what you told it to generate, the likelier you are to think it's doing a good job; the enthusiastically satisfied customer reviews on social media that I've interacted with certainly seemed to skew toward less-you-know types.

Even when the up-to-date version was released before the cutoff point, you are probably out of luck, since the newer version is likely way underrepresented in the training data compared to the previous versions that people may have been using for years by that point.

[–] homesweethomeMrL 11 points 2 days ago (1 children)
[–] [email protected] 10 points 2 days ago

Gromit facepalming (facepawing?) could be the techtakes mascot.

[–] [email protected] 10 points 2 days ago

Ed Zitron's podcast this week was discussing this very thing. The bubble is going to pop eventually.

[–] [email protected] 0 points 2 days ago

Been using DeepSeek R1 a lot during February, and it's been consistently better at giving me what I want on the first go than any of the other models.

OpenAI is stuffed.