this post was submitted on 09 Feb 2025
70 points (83.0% liked)

Technology


Overall, when tested on 40 prompts, DeepSeek was found to have a similar energy efficiency to the Meta model, but DeepSeek tended to generate much longer responses and therefore was found to use 87% more energy.
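The 87% figure follows directly from the article's framing: similar energy per token, but more tokens generated. A back-of-envelope sketch of that arithmetic (the per-token cost and token counts here are illustrative assumptions, not figures from the article):

```python
# Back-of-envelope: if two models use similar energy per token,
# total energy scales with response length. All numbers are made up
# for illustration; only the 87% ratio comes from the article.
def total_energy_j(joules_per_token: float, tokens: int) -> float:
    return joules_per_token * tokens

# Assume both models cost ~2 J per generated token (hypothetical).
meta_energy = total_energy_j(2.0, 1000)      # shorter response
deepseek_energy = total_energy_j(2.0, 1870)  # ~87% more tokens

overhead = deepseek_energy / meta_energy - 1.0
print(f"{overhead:.0%} more energy")  # 87% more energy
```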

top 19 comments
[–] [email protected] 19 points 1 day ago (1 children)

The FUD is hilarious. Even an LLM would tell you the article compares apples and oranges... FFS.

[–] [email protected] 2 points 1 day ago

You might think this is apples and oranges, but I think it's just another dimension: whether it's better to have quality, bountiful output, or whether such gains are eclipsed by the far wider appeal and adoption of the technology. Just like how the cotton gin's massive efficiency and yield increase in turning harvested cotton into clothing filling skyrocketed the harvesting of cotton.

[–] [email protected] 28 points 1 day ago (4 children)

That’s kind of a weird benchmark. Wouldn’t you want a more detailed reply? How is quality measured? I thought the biggest technical feats here were the ability to run reasonably well in constrained memory settings and the lower cost to train (and less energy used there).

[–] [email protected] 2 points 16 hours ago

This is more about the "reasoning" aspect of the model, where it outputs a bunch of "thinking" before the actual result. In a lot of cases it easily adds 2-3x onto the number of tokens that need to be generated. This isn't really useful output. It's the model getting into a state where it can better respond.
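That 2-3x of hidden "thinking" compounds directly into generation cost, since energy scales with tokens emitted. A minimal sketch of the effect, assuming the 2-3x multiplier from the comment (the answer length here is a made-up example):

```python
# Illustrative: a reasoning model emits hidden "thinking" tokens
# before the visible answer. If the trace adds 2-3x extra tokens on
# top of the answer, generation work grows in proportion.
def generated_tokens(answer_tokens: int, thinking_multiplier: float) -> int:
    # thinking_multiplier = extra tokens as a multiple of the answer length
    return int(answer_tokens * (1 + thinking_multiplier))

plain = generated_tokens(500, 0.0)      # no reasoning trace: 500 tokens
reasoning = generated_tokens(500, 2.5)  # midpoint of 2-3x extra: 1750 tokens

print(reasoning / plain)  # 3.5 -> 3.5x the tokens for the same visible answer
```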

[–] jacksilver 4 points 1 day ago

Longer != Detailed

Generally what they're calling out is that DeepSeek currently rambles more. With LLMs the challenge is how to get the right answer most succinctly, because each extra word is a lot of time/money.

That being said, I suspect that really it's all roughly the same. We've been seeing this back and forth with LLMs for a while and DeepSeek, while using a different approach, doesn't really break the mold.

[–] [email protected] 8 points 1 day ago

The benchmark feels just like the referenced Jevons Paradox to me: efficiency gains are eclipsed by a rise in consumption to produce more/better products.

[–] [email protected] 5 points 1 day ago

A more detailed and accurate reply is preferred, but length isn't a proxy for that. If anything, that's the problem with most LLMs: they tend to ramble more than they need to, and it's hard (at least with just prompting) to rein that in so the response narrows to just the answer.

[–] [email protected] 12 points 1 day ago (1 children)

And here I thought that the energy consumption was in the training.

[–] [email protected] 1 points 1 day ago

The issue might be that the energy it saves in training is offset by its more intensive techniques for answering questions, and by the long answers it produces.

[–] [email protected] 12 points 1 day ago (1 children)

The original claims of energy efficiency came from mixing up the energy usage of their much smaller model with their big model I think.

[–] [email protected] 21 points 1 day ago (1 children)

This article is comparing apples to oranges here. The DeepSeek R1 model is a 600-billion-parameter mixture-of-experts reasoning model, and the Meta model is a dense 70-billion-parameter model without reasoning, which performs much worse.

They should be comparing DeepSeek to reasoning models such as OpenAI's o1. Their results are comparable, but o1 costs significantly more to run. It's impossible to know how much energy it uses, because it's a closed-source model and OpenAI doesn't publish that information, but they charge a lot for it on their API.

TL;DR: It's a bad faith comparison. Like comparing a train to a car and complaining about how much more diesel the train used on a 3 mile trip between stations.

[–] [email protected] 1 points 1 day ago (1 children)

It's more like comparing them while they use the same fuel (the article compares them directly in joules): let's say the train also runs on gasoline. The car is far more "independent" and controllable, and "doesn't waste fuel driving to places you don't want to go", so it's seen as "better" and more appealing; but that wide appeal, and thus wide usage, creates far more demand for gasoline, dries up the planet, and clogs up the streets, wasting fuel idling at traffic stops.

[–] [email protected] 2 points 1 day ago (1 children)

Yeah, I was thinking diesel powered trains

[–] [email protected] 1 points 1 day ago (1 children)

The AI models use the same fuel for energy.

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago)

Yes, sorry, where I live it's pretty normal for cars to be diesel powered. What I meant by my comparison was that a train, when measured uncritically, uses more energy to run than a car due to its size and behavior, but that when compared fairly, the train has obvious gains and tradeoffs.

DeepSeek, as a 600B model, is more efficient than the 400B Llama model (a fairer size comparison), because it's a mixture-of-experts model with fewer active parameters, and when run in the R1 reasoning configuration it is probably still more efficient than a dense model of comparable intelligence.
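The "fewer active parameters" point can be made concrete: per-token compute in a transformer scales roughly with the parameters actually touched per token, not the total count. A sketch under that assumption (the ~37B active figure is the commonly cited number for DeepSeek-V3/R1-class MoE models, not something stated in the thread):

```python
# Illustrative active-parameter arithmetic. Per-token compute is
# roughly proportional to parameters used per token, so an MoE model
# can hold far more total parameters than a dense one at lower cost.
def relative_cost(active_params_b: float, tokens: int) -> float:
    # arbitrary units: billions of active params x tokens generated
    return active_params_b * tokens

# MoE: large total parameter count, but only ~37B active per token (assumed).
moe_cost = relative_cost(37, 1000)
# Dense 400B model: every parameter is used for every token.
dense_cost = relative_cost(400, 1000)

print(dense_cost / moe_cost)  # the dense model does ~10x more work per token
```

The caveat from upthread still applies: the reasoning configuration multiplies the token count, which eats into this per-token advantage.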