this post was submitted on 28 Jun 2023
19 points (85.2% liked)

Selfhosted


I currently have a 24/7 Linux old-office-PC-turned-server for self-hosting, and a desktop mostly for programming and playing games (Linux as a host + a Windows VM with a passed-through GPU). The server's i5-3330 is usually at ~10-15% usage.

Here's the actual idea: what if, instead of having a separate server and desktop, I had one beefy computer that'd run 24/7 acting as a server and just spun up a Linux or Windows VM when I needed a desktop? GPUs and USB stuff would be passed through, and I could buy a PCIe SATA or NVMe controller I could also pass through so I wouldn't have to worry about virtualized disk overhead.
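(For illustration, a rough sketch of how handing a whole controller to a VM could look with libvirt - the PCI address below is a made-up example, you'd look up your own with lspci:)

    # find the PCI address of the SATA/NVMe controller (address below is just an example)
    lspci -nn | grep -iE 'sata|nvme'
    # detach it from the host so it can be handed to the VM, then add it in
    # virt-manager via "Add Hardware > PCI Host Device"
    virsh nodedev-detach pci_0000_03_00_0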

I'm almost certain I could make this work, but I wonder if it's even worth it - would it consume less power? What about damage to the components from staying powered 24/7? It'd certainly be faster accessing a NAS without the whole "Network-Attached" part, and powering on the desktop for remote access could just be a command over SSH instead of some convoluted remote WoL that I haven't bothered setting up yet.
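(Something like this, assuming a libvirt host and a VM named "desktop" - both names are placeholders:)

    # boot the desktop VM remotely instead of fiddling with WoL
    ssh user@server 'virsh start desktop'
    # and shut it down cleanly when done
    ssh user@server 'virsh shutdown desktop'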

I'd love to hear your thoughts on this.

Edit 2 months later: Just bought a 7950X3D and use the 3D V-cache half of it for a virtualized desktop, with the other cores running the host and other VMs. It works perfectly when passing through a dedicated GPU, but iGPU passthrough seems very difficult if not impossible; I couldn't manage it.
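(In case it helps anyone, a minimal sketch of pinning the desktop VM onto the V-cache CCD with libvirt - the thread numbers are placeholders, check lscpu for your actual CCD layout:)

    # see which host threads share the big L3 cache (the V-cache CCD)
    lscpu -e=CPU,CORE,SOCKET,CACHE
    # pin each vCPU of the desktop VM to a thread on that CCD (repeat per vCPU)
    virsh vcpupin desktop 0 0 --config
    virsh vcpupin desktop 1 1 --config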

Edit even later-er: iGPU passthrough is possible on Ryzen 7000 after all; everything works great now.

top 25 comments
[–] outcide 7 points 2 years ago (3 children)

Based on ancient memories, what kills hardware is temperature variation (expanding/contracting eventually breaks things). So I wouldn't worry about any damage to components from being left on 24/7; in fact, it might even be an improvement.

[–] [email protected] 2 points 2 years ago

wait people turn their stuff off?

[–] [email protected] 1 points 2 years ago

Alright, that's good to hear!

[–] SheeEttin 1 points 2 years ago

Yes. And for spinning disks, it's the start-stop that'll cause them to die. Already-spinning disks can't fail to spin up.

[–] Nibodhika 4 points 2 years ago

I wouldn't do it if you have the hardware to keep things separated, but that's because of what I run on my server. You need to take into consideration that besides CPU/GPU usage, a server's heavy load will be on the network and disks, so while the system looks like it's not doing much, it might be doing a lot of I/O operations. The thing is that both your disks and your network have limited capacity, so if you're trying to play a game you might get longer load screens and higher ping than you would with one machine for each.

That being said, it vastly depends on what you're running on the server, but higher pings and lower FPS are a given since you'll now have more processes running in the background, so games that are CPU-bottlenecked will take a massive hit.

[–] SheeEttin 4 points 2 years ago

People try it, and usually it ends up being more trouble than it's worth.

But you might set up a beefy computer and just use the whole thing as both a desktop and a server, and never turn it off.

[–] linearchaos 3 points 2 years ago

I do this with Unraid (libvirt). VM overhead is real: I probably get 80% of the frame rate compared to bare metal when I'm local. I never use it locally though; it's in a rack in my basement.

I generally run Parsec on it and remote in from a netbook. If you can get both client and server wired, the experience is mostly passable.

[–] vegetaaaaaaa 3 points 2 years ago

I used to do this for years (32GB RAM Ryzen 5 Debian box running as both desktop machine + libvirt hypervisor). I ended up migrating VMs to a separate physical host because I sometimes had to shut down/dual-boot to Windows for games, and I needed a few always-running services like my Mumble server - other than this specific problem, it worked flawlessly.

[–] BombOmOm 3 points 2 years ago* (last edited 2 years ago) (1 children)

I personally do something similar and have several VMs on my main computer that perform various functions. As they are not particularly resource intensive, I have never had an issue with it. I also went the lazier route and run games directly on the hypervisor, not in a VM.

For you, GPU passthrough is the main hurdle. It is surmountable, but it isn’t as simple as other parts of VM setups. If you can get that part working well, everything else should fall into place.

Also, for the sake of your own sanity, do not try to ‘share’ the GPU between the hypervisor and a VM. Use the onboard GPU for the hypervisor (or a baby add-in GPU if you don’t have onboard).
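(Concretely, that usually means binding the discrete GPU to vfio-pci at boot so the host only ever initializes the onboard/baby GPU. A rough sketch - the vendor:device IDs below are placeholders, grab your own from lspci:)

    # find the GPU's (and its HDMI audio function's) vendor:device IDs
    lspci -nn | grep -iE 'vga|audio'
    # then claim them for vfio-pci via the kernel command line, e.g. in GRUB:
    #   vfio-pci.ids=10de:2204,10de:1aef
    # and regenerate the initramfs / grub config before rebooting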

[–] [email protected] 3 points 2 years ago

I've already got GPU passthrough working on my current desktop, so I feel like it should be fine, hopefully. I'm also planning to have the host completely headless and basically act as if it were a full-on dedicated server, potentially passing an iGPU to a Linux VM and the dGPU to Windows at the same time. Thanks for the help though!

[–] huquad 3 points 2 years ago* (last edited 2 years ago) (1 children)

Definitely possible! But as the other commenters have pointed out, there are some costs/tradeoffs to be aware of. I'll start by answering your questions.

Power consumption could technically be lower with a shared system due to less overhead (only one mobo, RAM, etc.), but most of the power goes to the CPU/GPU, so I don't think you'd see a huge difference. Likewise, an always-on VM vs. sleeping/turning off when you're not using it should only make a marginal difference.

As another commenter mentioned, always-on isn't a problem. Sustained elevated drive temperatures can be an issue, but really you're looking at elevated CPU/GPU temps, which won't be. The bigger issue is temperature cycling, but even then consumer hardware is derated to last 10-20 years as long as you aren't overvolting and you keep up with periodic repaste/repadding (every 5 years or so is typically recommended).

Finally, for turning on your VM, I'd recommend just leaving it on. Alternatively, you could send an SSH command as you stated.

Having a hypervisor server with VMs is very common and well documented if you only want VNC/SSH. Regardless, any server maintenance/reboots will obviously also disrupt the desktop. Additionally, VNC doesn't support audio. I believe Windows Remote Desktop has audio, but I'm not sure about the quality.

To get improved video/audio, you'll need a GPU. Once you add a GPU, things get trickier. First, your host/server will try to use the GPU; there are ways to force the host not to use it, but that can be fiddly. Alternatively, you can look into VFIO, which hands the GPU off to the VM when it's turned on; however, this is even trickier. Finally, you can install two GPUs (or use an iGPU/APU if applicable) and pass one through. Last I looked, NVIDIA and AMD are both workable options and this is now easier than ever. Regardless, if you plan on gaming, you should know some games will block you for playing in a VM (specifically games with anti-cheat). All that said, desktop/server has some drawbacks but is still a great option. Your next step is choosing your hypervisor.
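(Rough sketch of the usual first steps, in case it helps - kernel parameters for an AMD board, then a check of which IOMMU group the GPU lands in; swap in intel_iommu=on for Intel:)

    # kernel command line (AMD):  amd_iommu=on iommu=pt
    # after rebooting, list the IOMMU groups to see what the GPU is grouped with
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#*/iommu_groups/}; g=${g%%/*}
        printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
    done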

[–] [email protected] 1 points 2 years ago (1 children)

Very helpful, thanks! I've got VFIO already figured out, and I haven't yet run into a game I wanted to but couldn't play because of anti-cheat with my current setup. For the hypervisor I'm thinking either libvirt with virt-manager or proxmox, but I'm leaning towards proxmox because it's web-based so I could have the host running completely headlessly and pass all GPUs to VMs.

[–] huquad 1 points 2 years ago

Nice! Sounds like you're all set then. I'd go proxmox for the reasons you've stated. Additionally, it appears to have pretty good support/community. Cheers

[–] fuckwit_mcbumcrumble 3 points 2 years ago (1 children)

VMware Workstation works great. Just install Windows or Linux on the box, then Workstation, and fire up all the VMs you could dream of while using the PC as a normal PC.

But just know that any time you need to reboot your PC, you need to reboot your server, which sucks. It's much better to just keep the old office PC. That old i5 uses so little power at idle compared to a modern CPU being kept perpetually awake.

[–] vegetaaaaaaa 2 points 2 years ago (1 children)

VMWare Workstation is a Type 2 hypervisor and the performance is horrible.

[–] fuckwit_mcbumcrumble 2 points 2 years ago (1 children)

I can't find any benchmarks comparing Workstation to ESXi. But at work we spend most of our time in type 2 hypervisors and the performance is just fine. Just make sure you're not using the Windows Hypervisor Platform, because that does have a huge performance penalty. Considering OP uses an old i5, I'm sure a modern CPU would handle the load just fine.

But importantly, Workstation has something ESXi doesn't: 3D acceleration. And if you're doing anything graphical, it makes a huge difference.

[–] vegetaaaaaaa 1 points 2 years ago

KVM/libvirt (type 1) blows VMware Workstation out of the water performance-wise (and license-cost-wise, since it's FOSS). I don't have benchmarks at hand, but expect a difference on the order of 20% for I/O and 10% for CPU, which quickly adds up. The impact on RAM usage should be negligible. Of course it depends on the workload; CPU-bound workloads will suffer less, but a lot of workloads are I/O-bound (databases, for example).
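(I don't have numbers either, but a quick-and-dirty way to check for yourself is to run the same fio job on the host and inside the guest and compare IOPS - the parameters here are just a sketch:)

    # run on bare metal, then inside the VM, and compare the results
    fio --name=randread --rw=randread --bs=4k --size=1G --direct=1 \
        --runtime=30 --time_based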

VMWare reportedly has the best 3D acceleration support, yes. But it doesn't support hardware passthrough (which type 1 hypervisors do).

Of course if you're in Windows-land you don't have much choice (is Hyper-V slower than Workstation? Shouldn't be - but hey it's a microsoft product :) )

[–] [email protected] 2 points 2 years ago

As long as your desktop is in one VM and your server is in another, you should be OK; desktop usage usually ends up needing more reboots and fiddling.

[–] [email protected] 2 points 2 years ago

I did something like this some years ago. I had watched the "One PC, 2 Screens" video (or something like that) from Linus Tech Tips, so I wanted to try it myself.

Using Unraid makes it easy to set up. I had 2 Windows VMs, each with its own screen, keyboard/mouse, and dedicated GPU, and then Docker running my self-hosted stuff.

It uses more power though, because the PC is always on and the hardware needs to be beefy.

[–] world_hopper 1 points 2 years ago (1 children)

I've thought about this too. All the hypervisors I've wanted to test, or use to quickly try out operating systems, require browser access over the network.

Which hypervisors are people using for OP's type of use case? KVM?

[–] vegetaaaaaaa 2 points 2 years ago

I run libvirt + virt-manager for VMs on my desktop machine. My headless hypervisor box also runs libvirt but without the virt-manager graphical interface - I can still use virt-manager from my desktop to connect to/manage this headless libvirt instance.
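(Concretely, the remote connection looks something like this - the hostname is a placeholder:)

    # one-off commands with virsh over SSH
    virsh -c qemu+ssh://user@server/system list --all
    # or point virt-manager at the same URI
    virt-manager -c qemu+ssh://user@server/system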

[–] lwuy9v5 1 points 2 years ago

Could maybe look into Guacamole https://guacamole.apache.org/

[–] [email protected] 1 points 2 years ago (1 children)

Possible, but having a relatively big and noisy PC + UPS etc. right next to your desk is not so great.

[–] [email protected] 1 points 2 years ago* (last edited 2 years ago)

Right now I don't really have the budget for a UPS, and the PC wouldn't be too different from standard workstation hardware except for a few extra disks. I've also got closed-back headphones, which pretty much make the noise a non-issue anyway.
