this post was submitted on 08 Feb 2025
87 points (97.8% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 20 points 4 days ago (3 children)

i think you're missing the point that "Deepseek was made for only $6M" has been the trending headline for the past while, with the specific point of comparison being the massive costs of developing ChatGPT, Copilot, Gemini, et al.

to stretch your metaphor, it's like someone rolling up with their car, claiming it only costs $20 (unlike all the other cars that cost $20,000), when, come to find out, that number is just how much it costs to fill the gas tank up once

[–] [email protected] 7 points 3 days ago

Now I'm imagining GPUs being traded like old cars.

*slaps GPU* This GPU? Perfectly fine. Second-hand, yes, but only used to train one model, by an old lady. Will run the upcoming Monster Hunter Wilds perfectly fine.

[–] [email protected] 6 points 3 days ago (1 children)

DeepSeek-V3 costs only 2.788M GPU hours for its full training. Assuming the rental price of the H800 GPU is $2 per GPU hour, our total training costs amount to only $5.576M. Note that the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data.

Emphasis mine. DeepSeek was very upfront that this $6M was training only. No other company includes R&D and salaries when it reports model training costs, because those aren't training costs.
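for what it's worth, the paper's arithmetic checks out on its own stated terms. a quick sanity check (the $2/GPU-hour rental rate is DeepSeek's own assumption, not a market quote):

```python
# Sanity-check DeepSeek's stated training cost: GPU-hours x assumed rental rate.
gpu_hours = 2_788_000        # 2.788M H800 GPU-hours, per the DeepSeek-V3 paper
rate_per_gpu_hour = 2.00     # USD per GPU-hour, the paper's assumed H800 rental price

training_cost = gpu_hours * rate_per_gpu_hour
print(f"${training_cost / 1e6:.3f}M")  # prints "$5.576M"
```

which is exactly the figure quoted above; the dispute is over what that figure excludes, not the multiplication.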

[–] [email protected] 11 points 3 days ago* (last edited 3 days ago) (2 children)

consider this paragraph from the Wall Street Journal:

DeepSeek said training one of its latest models cost $5.6 million, compared with the $100 million to $1 billion range cited last year by Dario Amodei, chief executive of the AI developer Anthropic, as the cost of building a model.

you're arguing to me that they technically didn't lie -- but it's pretty clear that some people walked away with a false impression of the cost of their product relative to their competitors' products, and they financially benefitted from people believing in this false impression.

[–] [email protected] 2 points 2 days ago (1 children)

Okay, I mean, I hate to somehow come to the defense of a slop company? But the WSJ saying nonsense is really not their fault; even that particular quote clearly says "DeepSeek said training one" cost $5.6M. That's just a true statement. No one in their right mind includes capital expenditure in that, the same way that when you say "it took us 100h to train a model", that doesn't include building a data center in those 100h.

Besides whether they actually lied or not, it's still immensely funny to me that they could've just told a blatant lie nobody fact-checked, and it shook the market to the fucking core, wiping off like billions in valuation. Very real market based on very real fundamentals, run by very serious adults.

[–] [email protected] 3 points 2 days ago* (last edited 2 days ago) (1 children)

i can admit it's possible i'm being overly cynical here and it is just sloppy journalism on Raffaele Huang/his editor/the WSJ's part. but i still think it's a little suspect on the grounds that we have no idea how many times they had to restart training due to the model borking, or what the other experiments and hidden costs were, even before things like the necessary capex (which goes unmentioned in the original paper -- though they note using a 2048-GPU cluster of H800s, which would put them down around $40m). i'm thinking in the mode of "the whitepaper exists to serve the company's bottom line"
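for anyone checking that capex ballpark: a rough back-of-envelope, where the per-unit H800 price is my assumption (actual pricing varied and isn't in the paper):

```python
# Rough capex estimate for the training cluster -- NOT part of the $5.576M training figure.
num_gpus = 2048              # H800 cluster size reported in the DeepSeek-V3 paper
price_per_gpu = 20_000       # USD per unit -- assumed for illustration, not a published number

capex = num_gpus * price_per_gpu
print(f"~${capex / 1e6:.1f}M")  # prints "~$41.0M", roughly the $40m ballpark
```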

btw announcing my new V7 model that i trained for the $0.26 i found on the street just to watch the stock markets burn

[–] [email protected] 4 points 2 days ago

but i still think that it’s a little suspect on the grounds that we have no idea how many times they had to restart training due to the model borking, other experiments and hidden cost

Oh yeah, I totally agree on this one. This entire genAI enterprise insults me on a fundamental level as a CS researcher: there's zero transparency or reproducibility, no one reviews these claims, and it's a complete shitshow from terrible, terrible benchmarks, through shoddy methodology, up to untestable and bonkers claims.

I have zero good faith for the press, though, they're experts in painting any and all tech claims in the best light possible like their lives fucking depend on it. We wouldn't be where we are right now if anyone at any "reputable" newspaper like WSJ asked one (1) question to Sam Altman like 3 years ago.

[–] [email protected] -2 points 3 days ago (1 children)

but it's pretty clear that some people walked away with a false impression of the cost of their product relative to their competitors' products

Ask yourself why that may be, as you are the one who posted a link to a WSJ article that is repeating an absurd $100M-$1B figure from a guy who has a vested interest in making the barrier to entry into the field seem as high as possible to increase the valuation of his company. Did WSJ make an attempt to verify the accuracy of these statements? Did it push for further clarification? Did it compare those statements to figures that have been made public by Meta and OpenAI? No on all counts -- yet somehow "deepseek lied" because it explicitly stated that its costs didn't include capex, salaries, or R&D, and the media couldn't be bothered to read to the end of the paragraph.

[–] [email protected] 6 points 3 days ago (1 children)

"the media sucks at factchecking DeepSeek's claims" is... an interesting attempt at refuting the idea that DeepSeek's claims aren't entirely factual. beyond that, intentionally presenting true statements that lead to false impressions is a kind of dishonesty regardless. if you mean to argue that DeepSeek wasn't being underhanded at all and just very innocently presented their figures without proper context (that just so happened to spurn a media frenzy in their favor)... then i have a bridge to sell you.

besides that, OpenAI is very demonstrably pissing away at least that much money every time they add one to the number at the end of their slop generator

[–] [email protected] 0 points 3 days ago (1 children)

No, it's not. OpenAI doesn't spend all that money on R&D; they spend the majority of it on the actual training (hardware, electricity).

And that's (supposedly) only $6M for Deepseek.

So where is the lie?

[–] [email protected] 6 points 3 days ago* (last edited 3 days ago) (1 children)

shot:

majority of it on the actual training (hardware, ...)

chaser:

And that’s (supposedly) only $6M for Deepseek.

citation:

After experimentation with models with clusters of thousands of GPUs, High Flyer made an investment in 10,000 A100 GPUs in 2021 before any export restrictions. That paid off. As High-Flyer improved, they realized that it was time to spin off “DeepSeek” in May 2023 with the goal of pursuing further AI capabilities with more focus.
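for scale, the hardware buy described in that citation dwarfs the headline training figure. a sketch, with the 2021 A100 unit price being an assumption on my part:

```python
# High-Flyer's 2021 GPU purchase vs. the $6M headline training cost.
a100_count = 10_000          # GPUs bought in 2021, per the SemiAnalysis quote
price_per_a100 = 10_000      # USD each -- assumed for illustration; real 2021 pricing varied

hardware_spend = a100_count * price_per_a100
headline_training_cost = 5_576_000  # the paper's stated training-only figure

print(f"hardware ~${hardware_spend / 1e6:.0f}M vs training ${headline_training_cost / 1e6:.3f}M")
```

under that assumption the hardware alone runs well over an order of magnitude past the training-only number.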

So where is the lie?

your post is asking a lot of questions already answered by your posting

[–] [email protected] -5 points 3 days ago (1 children)

SemiAnalysis is “confident”

They did not answer anything; they only alluded to it.

Just because they bought GPUs like everyone else doesn't mean they could not train it cheaper.

[–] [email protected] 8 points 3 days ago

standard “fuck off programming.dev” ban with a side of who the fuck cares. deepseek isn’t the good guys, you weird fucks don’t have to go to a nitpick war defending them, there’s no good guys in LLMs and generative AI. all these people are grifters, all of them are gaming the benchmarks they designed to be gamed, nobody’s getting good results out of this fucking mediocre technology.