this post was submitted on 14 Nov 2024
586 points (98.5% liked)
memes
Isn't that a bit like buying an old truck instead of a year old Miata?
Afaik those CPUs use a lot of juice even when idling ... sure, you don't get all those lanes or ECC, but a PC at the same price with a few-year-old CPU outclasses that server CPU by a lot, at a fraction of the running cost (and quietly).
Just something to keep in mind as an alternative, especially when you don't intend to saturate all that PCIe bandwidth (i.e. several users running several intensive tasks that benefit from the wider bus to RAM and PCIe, even with a slow CPU).
Ok, you do miss out on some fancy admin stuff, but ... it's just for home use ...
Yeah server hardware isn't the most efficient if you want to save power. It's probably better to get a NUC or something.
With that said, my old Dell PowerEdge R730 only uses around 84 watts (running around 5 VMs that are doing pretty much nothing). The server runs Proxmox and has 128 GB of RAM, two Xeon E5-2667 v4 CPUs, 4 old used 1 TB HDDs I bought for cheap, and 4 old used 128 GB SATA SSDs, also bought for cheap (all storage is 2.5-inch drives).
All I had to do was change a few BIOS settings to prioritize efficiency over performance. 84 watts is obviously still not great but it's not that bad.
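For a rough sense of what that constant draw costs over a year, here's a quick back-of-envelope sketch (the $0.15/kWh rate and the 15 W NUC-class idle figure are assumptions, not measurements):

```python
# Rough annual electricity cost of a machine idling at a constant draw.
# The price per kWh and the wattages below are illustrative assumptions.
def annual_cost(watts, price_per_kwh=0.15):
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

print(f"{annual_cost(84):.2f}")  # R730 at 84 W -> 110.38 per year
print(f"{annual_cost(15):.2f}")  # NUC-class idle at ~15 W -> 19.71 per year
```

So at that assumed rate, the gap between a tuned old server and a small consumer box is on the order of $90/year.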
Sounds nice, but yes, uses quite a bit of power.
I should measure mine - I have a Ryzen 5900 (24 threads, 64 MB cache ... roughly a 20k Cinebench score) as the main, and a Core 12700 (16+4 threads, 12 MB).
(And Intel gen 7 and gen 2 boxes at my parents'. All of them running Proxmox.)
Never ever managed to bottleneck anything on them, not really, and I got them super cheap used.
Buying anything server/enterprise that powerful would cost me a lot of money. And it would probably have two CPUs, which doubles a lot of the power-hungry bits.
The only reason that I have measured my server is that it has that feature built into the iDRAC. I have been thinking of buying an external power meter for years but have never bothered to do that.
Luckily I got my server for free from work. It was part of an old SAN so it came with 4 dual 16 Gbit fiber channel cards and 2 dual 10 gigabit ethernet cards. Before I took those out of the server it consumed around 150 watts at idle which is crazy.
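Since the iDRAC exposes power readings over IPMI, this can also be scripted instead of bought as an external meter. A minimal sketch, assuming `ipmitool` is installed, the BMC supports DCMI power readings, and the output line format matches typical `ipmitool` output:

```python
import re
import subprocess

def read_power_watts(run=subprocess.run):
    """Query instantaneous power draw from the BMC via IPMI DCMI.

    The `run` parameter is injectable for testing. The expected
    output line format is an assumption based on typical ipmitool
    output, e.g. 'Instantaneous power reading: 84 Watts'.
    """
    out = run(["ipmitool", "dcmi", "power", "reading"],
              capture_output=True, text=True, check=True).stdout
    m = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    return int(m.group(1)) if m else None
```

Run it on the host itself, or point `ipmitool` at the iDRAC over the network, and you can log idle draw over time without any extra hardware.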
I always recommend buying enterprise grade hardware for this type of thing, for two reasons:
Consumer-grade hardware is just that - it's not built for long-term, constant workloads (that is, server workloads). It's not built for redundancy. The Dell PowerEdge has hot-swappable drive bays, a hardware RAID controller, dual CPU sockets, 8 RAM slots, dual built-in NICs, the iDRAC interface, and redundant hot-swappable PSUs. It's designed to be on all the time, reliably, and can be remotely managed.
For a lot of people who are interested in this, a homelab is a path into a technology career. Working with enterprise hardware is better experience.
Consumer CPUs won't handle server tasks the way server CPUs do. If you want to run a server, you want hardware that's built for server workloads: stability, reliability, redundancy.
So I guess yes, it is like buying an old truck? Because you want to do work, not go fast.
Is this mythology? :P
Server stuff is unusual and mysterious, rare, and expensive - I get the allure.
I like your second point (though I wouldn't say "a lot" - most of us just want services at home, and Proxmox, or Linux in general, isn't the most common hypervisor to learn for landing a job at, say, a mid-sized company). But for the rest: a PC can take sustained loads just as well as enterprise/server hardware. This isn't the '90s or early 2000s, when you got shitty capacitors on even the best consumer mobos. Your average second-gen Core PC could have run non-stop from its birth to today.
The exception is hard drives, which homelabbers buy enterprise-grade anyway.
BTW - who actually has their homelab under full load all the time? (Not sarcasm, genuinely asking for use cases.)
The rest is just additional equipment you might or might not need. A second CPU socket is irrelevant when buying old servers, and the RAM slots need to be filled to even take advantage of the extra memory channels of server CPUs - and even then, older tech might still be slower than dual-channel DDR5. Drive bays are cheap to buy ... though if you want nice hot-swappable PSUs, you do need a server/workstation case.
Server and consumer CPUs mostly differ in how well they parallelize tasks, mainly by having more cores and more lanes. But if a modern CPU core outclasses an old server CPU core something like 10:1, that logic just doesn't add up anymore. Both do the same work.
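To make that 10:1 point concrete, here's a toy throughput comparison (the core counts and per-core scores are invented for illustration, not real benchmark numbers):

```python
# Toy model: aggregate throughput = cores x per-core score.
# Ignores memory bandwidth, lanes, etc.; all numbers are invented.
def aggregate(cores, per_core_score):
    return cores * per_core_score

old_dual_xeon = aggregate(2 * 8, 100)  # two hypothetical 8-core server CPUs
modern_desktop = aggregate(12, 500)    # one hypothetical 12-core desktop CPU

print(modern_desktop / old_dual_xeon)  # -> 3.75: fewer, faster cores win
```

Even with twice the sockets and more total cores, the old box loses on raw throughput once the per-core gap gets big enough - which is the whole argument in one division.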
Imho old servers aren't super cheap; they're priced accordingly.
I think this whole consumer-vs-enterprise hardware debate (except hard drives, ofc) can be summed up in a proxy question: do homelabbers need registered ECC RAM?
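If ECC is the proxy question, it's at least easy to check what a given box reports. A sketch that parses `dmidecode --type memory` output for the error-correction field (the sample format used in testing is an assumption based on typical dmidecode layout):

```python
import re

def ecc_type(dmidecode_output):
    """Return the 'Error Correction Type' value from
    `dmidecode --type memory` output (e.g. 'None',
    'Single-bit ECC'), or None if the field is absent."""
    m = re.search(r"Error Correction Type:\s*(.+)", dmidecode_output)
    return m.group(1).strip() if m else None

# Typical use: feed it the captured output of
# `sudo dmidecode --type memory`.
```

A reading of "None" on most consumer boards - versus ECC on the old PowerEdge - is basically the one hardware gap the consumer-PC side of this debate can't argue away.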