Based on ancient memories, what kills hardware is temperature variation: repeated expansion and contraction eventually breaks things. So I wouldn't worry about damage to components from being left on 24/7; in fact, it might even be an improvement.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
wait people turn their stuff off?
Alright, that's good to hear!
Yes. And for spinning disks, it's the start-stop that'll cause them to die. Already-spinning disks can't fail to spin up.
I wouldn't do that if you have the hardware to keep things separated, but that's because of what I run on my server. Keep in mind that besides CPU/GPU usage, a server's heavy load falls on the network and disks, so while the system looks like it's not doing much, it might be doing a lot of I/O. Both your disks and your network have limited capacity, so if you're trying to play a game you might get longer load screens and higher ping than you would with one machine for each role.
That being said, it vastly depends on what you're running on the server, but higher pings and lower FPS are a given since you'll now have more processes running in the background, so games that are CPU-bottlenecked will take a massive hit.
People try it, and usually it ends up being more trouble than it's worth.
But you might set up a beefy computer and just use the whole thing as both a desktop and a server, and never turn it off.
I do this with Unraid. (Libvirt) VM overhead is real: I probably get 80% of bare-metal frame rates when using it locally. I never use it locally though; it's in a rack in my basement.
I generally run Parsec on it and remote in from a netbook. If you can get both client and server wired, the experience is mostly passable.
I used to do this for years (32GB RAM Ryzen 5 Debian box running as both desktop machine + libvirt hypervisor). I ended up migrating VMs to a separate physical host because I sometimes had to shut down/dual-boot to Windows for games, and I needed a few always-running services like my Mumble server - other than this specific problem, it worked flawlessly.
I personally do something similar and have several VMs on my main computer that perform various functions. As they are not particularly resource intensive, I have never had an issue with it. I also went the lazier route and run games directly on the hypervisor, not in a VM.
For you, GPU passthrough is the main hurdle. It is surmountable, but it isn’t as simple as other parts of VM setups. If you can get that part working well, everything else should fall into place.
Also, for the sake of your own sanity, do not try to ‘share’ the GPU between the hypervisor and a VM. Use the onboard GPU for the hypervisor (or a baby add-in GPU if you don’t have onboard).
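As a sketch of what "don't share the GPU" looks like in practice on a KVM/libvirt host: the usual approach is to bind the guest GPU to the vfio-pci stub driver at boot so the host never claims it. The PCI IDs below are placeholders; substitute your own from `lspci -nn`.

```shell
# 1. Find the GPU's vendor:device IDs (VGA function plus its HDMI audio function):
lspci -nn | grep -iE 'vga|audio'

# 2. Enable the IOMMU and reserve those IDs for vfio-pci on the kernel
#    command line (e.g. in /etc/default/grub; IDs here are examples):
#    intel_iommu=on iommu=pt vfio-pci.ids=10de:2484,10de:228b
#    (use amd_iommu=on instead on AMD platforms)

# 3. Make vfio-pci claim the card before the real driver can,
#    e.g. in /etc/modprobe.d/vfio.conf:
#    options vfio-pci ids=10de:2484,10de:228b
#    softdep nvidia pre: vfio-pci
```

Whether step 3 is needed depends on the distro and GPU driver; the general idea is just that the host boots with the passthrough card already claimed by the stub driver.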
I've already got GPU passthrough working on my current desktop, so I feel like it should be fine, hopefully. I'm also planning to have the host completely headless, basically acting as a full-on dedicated server, potentially passing the iGPU to a Linux VM and the dGPU to Windows at the same time. Thanks for the help though!
Definitely possible! But as the other commenters have pointed out, there are some costs/tradeoffs to be aware of. I'll start by answering your questions. Power consumption could technically be lower on a shared system due to less overhead (only one mobo, RAM, etc.), but power draw is mostly CPU/GPU, so I don't think you'd see a huge difference. Likewise, an always-on VM versus sleeping/turning it off when you're not using it should have a marginal effect. As another commenter mentioned, always-on isn't a problem. Sustained elevated drive temperatures can be an issue, but really you're looking at elevated CPU/GPU temps, which won't be. The bigger issue is temperature cycling, but even then, consumer hardware is derated to last 10-20 years as long as you aren't overvolting and you keep up with periodic repaste/repadding (every 5 years or so is typically recommended). Finally, for turning on your VM, I'd recommend just leaving it on. Alternatively, you could send an SSH command as you suggested.
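For the "send an SSH command" option: if the VM lives on a libvirt host, starting and stopping it remotely is a one-liner. The host name `server` and domain name `win-gaming` below are made-up placeholders.

```shell
# Start the VM on the remote hypervisor:
ssh server virsh --connect qemu:///system start win-gaming

# Graceful ACPI shutdown when you're done:
ssh server virsh --connect qemu:///system shutdown win-gaming
```

You could wrap either line in a desktop shortcut or alias so waking the gaming VM is a single click.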
Having a hypervisor server with VMs is very common and well documented if you only want VNC/SSH. Regardless, any server maintenance/reboots will obviously disrupt the desktop too. Additionally, VNC doesn't support audio. I believe Windows Remote Desktop has audio, but I'm not sure about the quality.
To get improved video/audio, you'll need a GPU, and once you add a GPU, things get trickier. First, your host/server will try to use the GPU. There are ways to force the host not to use it, but that can be fiddly. Alternatively, you can look into VFIO, which hands the GPU off to the VM when it's turned on; however, this is even trickier. Finally, you can install two GPUs (or use an iGPU/APU if applicable) and pass one through to the VM. Last I looked, NVIDIA and AMD are both viable options, and this is now easier than ever. Regardless, if you plan on gaming, you should know some games (specifically ones with anticheat) will block you for playing in a VM. All that said, desktop/server has some drawbacks but is still a great option. Your next step is choosing your hypervisor.
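Before committing to passthrough, it's worth checking that the GPU sits in its own IOMMU group. A rough diagnostic sketch, assuming a Linux host with the IOMMU enabled (standard sysfs paths):

```shell
# Print each PCI device with its IOMMU group. The GPU (and its HDMI audio
# function) should ideally be alone in a group, or only with each other.
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#*/iommu_groups/}; g=${g%%/*}   # extract the group number from the path
  printf 'IOMMU group %s: ' "$g"
  lspci -nns "${d##*/}"                # pretty-print the device at that address
done
```

If the GPU shares a group with unrelated devices, passthrough gets harder (everything in a group must be handed to the VM together).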
Very helpful, thanks! I've got VFIO already figured out, and I haven't yet run into a game I wanted to but couldn't play because of anti-cheat with my current setup. For the hypervisor I'm thinking either libvirt with virt-manager or proxmox, but I'm leaning towards proxmox because it's web-based so I could have the host running completely headlessly and pass all GPUs to VMs.
Nice! Sounds like you're all set then. I'd go proxmox for the reasons you've stated. Additionally, it appears to have pretty good support/community. Cheers
VMware workstation works great. Just install Windows or linux on the box, workstation, and fire up all the VMs you could dream of, while using the PC as a normal PC.
But just know that any time you need to reboot your PC, you need to reboot your server too, which sucks. It's much better to just keep the old office PC. That old i5 uses so little power at idle versus a modern CPU being perpetually kept awake.
VMWare Workstation is a Type 2 hypervisor and the performance is horrible.
I can't find any benchmarks comparing workstation to ESXi. But for work we spend most of our time in type 2 hypervisors and performance is just fine. Just make sure you're not using the Windows Hypervisor Platform because that does have a huge performance penalty. Considering OP uses an old i5 I'm sure a modern CPU would handle the load just fine.
But importantly workstation has something ESXi doesn't, 3D Acceleration. And if you're doing anything graphical it makes a huge difference.
KVM/libvirt (type 1) blows VMware Workstation out of the water, performance-wise (and license-cost-wise, since it's FOSS). I don't have benchmarks at hand, but expect gains on the order of 20% for I/O and 10% for CPU, which quickly adds up. RAM usage impact should be negligible. Of course it depends on the workload; CPU-bound workloads will suffer less, but a lot of workloads are I/O-bound (databases, for example).
VMWare reportedly has the best 3D acceleration support, yes. But it doesn't support hardware passthrough (which type 1 hypervisors do).
Of course, if you're in Windows-land you don't have much choice (is Hyper-V slower than Workstation? It shouldn't be - but hey, it's a Microsoft product :) )
As long as your desktop is in one VM and your server is in another, you should be OK; desktop usage usually ends up needing more reboots and fiddling.
I did something like this some years ago. I had watched the “One PC, 2 Screens” video (or something like that) from Linus Tech Tips, so I wanted to try it myself.
Unraid made it easy to set up: I had 2 Windows VMs, each with its own screen, keyboard/mouse, and dedicated GPU, plus Docker running my selfhosted stuff.
It uses more power though, because the PC is always on and the hardware needs to be beefy.
I've thought about this too. All the hypervisors I've wanted to test or use to try operating systems quickly require browser usage over the network.
Which hypervisors are people using for OP's type of use case? KVM?
I run libvirt + virt-manager for VMs on my desktop machine. My headless hypervisor box also runs libvirt but without the virt-manager graphical interface - I can still use virt-manager from my desktop to connect to/manage this headless libvirt instance.
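In case it helps anyone trying the same setup: virt-manager connects to a remote headless libvirt daemon over SSH via a connection URI. The `user` and `server` names below are placeholders.

```shell
# Point a local virt-manager at the headless libvirt host:
virt-manager -c 'qemu+ssh://user@server/system'

# Or manage it ad hoc from the terminal with virsh:
virsh -c 'qemu+ssh://user@server/system' list --all
```

This only needs SSH access and libvirt on the remote box; no web UI or extra ports required.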
Possible, but having a relatively big and noisy PC + UPS etc. right next to your desk is not so great.
Right now I don't really have the budget for a UPS, and the PC wouldn't be too different from standard workstation hardware except for a few extra disks. I've also got closed-back headphones, which pretty much makes the noise a non-issue anyway.