this post was submitted on 28 Jul 2023
18 points (95.0% liked)

I currently have a server running Arch Linux and Jellyfin, one Raspberry Pi 4 running NextCloudPi, and another Raspberry Pi running Pi-hole. Eventually I want to host all of these services and more on one machine.
I've thought about using Proxmox and Docker, but I'm not sure what the ideal setup would look like. For now my plan is Proxmox with a simple Debian VM that runs Docker, with Portainer, Pi-hole, Nextcloud, a reverse proxy, and Jellyfin as Docker containers.
Is that a smart setup? It gives me the ease of using Docker and an easy way of creating backups of single applications or the whole VM, while leaving me the possibility to add containers or VMs for other services, for testing, etc. Or should I just use LXC for these applications?
Any guidance would be appreciated!
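
For a concrete picture, here is a minimal sketch of what two of those containers could look like as plain `docker run` commands on the Debian VM. The ports, volume names, and media path are illustrative assumptions, not a tested config:

```bash
# Portainer CE for managing the containers (9443 is its default HTTPS port).
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce

# Jellyfin, with media mounted read-only from the host.
# /mnt/media is an example path; adjust to wherever the library lives.
docker run -d --name jellyfin --restart=unless-stopped \
  -p 8096:8096 \
  -v jellyfin_config:/config \
  -v /mnt/media:/media:ro \
  jellyfin/jellyfin
```

In practice you'd probably put all five services in a single Compose file so they can be backed up and recreated together.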

EDIT: In case my comment below was overlooked: thanks for all your comments! I'll see how I implement things when I get the time to reinstall my server.

top 10 comments
[email protected] 8 points 1 year ago

Proxmox, Nextcloud, and Jellyfin user here. My setup separates groups of services into their own VMs; Docker is just another way to package and deploy applications, and it simplifies the process.

So Nextcloud and Jellyfin get their own VMs, and I deploy the applications via Docker on the separate VMs. If you want to utilize Portainer, you can deploy an agent to each of these VMs.

Lightweight applications I typically deploy to separate LXC containers. Portainer, Pi-hole, NGINX would all get separate LXC containers. You can connect to the other VM Portainer agents from the LXC Portainer server.
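
For reference, the agent deployment mentioned above is a one-liner per VM; this follows Portainer's standard agent setup, with 9001 being the agent's default port:

```bash
# Run the Portainer agent on each Docker VM so the central Portainer
# server (here living in its own LXC) can manage it over port 9001.
docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent
```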

mountainCalledMonkey 1 point 1 year ago

Second this: I tend to follow the same scheme.

grue 5 points 1 year ago

I run NextCloud and Jellyfin (more specifically, TurnKey Nextcloud and TurnKey MediaServer) on Proxmox as LXC containers. I don't know if that's good (I'm a noob too), but it seems reasonable?

This might also be a misunderstanding on my part, but the way I see it, if you're going to run exclusively Docker containers, do you really need Proxmox (as opposed to Docker just running directly on a physical machine, or a Kubernetes cluster or something)?

Don't get me wrong: I do have a Docker VM in Proxmox; it's just that my order of preference for how to run any particular service goes LXC -> VM -> Docker instead of the other way around. LXCs come first because they're lighter than VMs, and both come before Docker containers because they can be managed directly in the Proxmox UI instead of having to use a different tool. I use Docker only for software like Traefik, where the documentation makes it clear that Docker is the preferred/best-supported deployment method.
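
To illustrate the "managed directly in Proxmox" point, creating such an LXC from the host shell is a single `pct create` call; the VMID, template filename, and storage names below are assumptions for illustration:

```bash
# Create and start an unprivileged Debian LXC for a lightweight service.
# VMID 110, the template filename, and "local-lvm" are example values.
pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname proxy \
  --memory 512 --cores 1 \
  --rootfs local-lvm:4 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 110
```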

[email protected] 2 points 1 year ago (last edited 1 year ago)

After spending a week working through the intricacies of running it in a VM and in LXC, I settled on a privileged LXC container.

It was so much simpler to get Quick Sync hardware transcoding working, and it just seems so much faster in LXC. Also, the host GPU can be shared across multiple LXC containers.

I just run a weekly backup for the LXC using Proxmox backup to an NFS share on the NAS.
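
For anyone reproducing this, sharing the host's /dev/dri into the container and running the backup are both short operations; VMID 101 and the storage name are assumptions:

```bash
# Expose the host's Intel iGPU (/dev/dri) to privileged LXC 101 so
# Jellyfin can use Quick Sync; major number 226 is the DRI device class.
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
EOF

# One-off equivalent of the weekly job: back up the container to an
# NFS-backed Proxmox storage (the name "nas-nfs" is an example).
vzdump 101 --storage nas-nfs --mode snapshot --compress zstd
```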

SlothCroissant 2 points 1 year ago

I ran this way forever, and I also did it with Kubernetes. It works super well: you can pass through any hardware you need (a GPU for Plex, for example), and it also lets you control the allocation of resources if you've got other things on the server and don't want Docker running wild for any reason (Plex transcoding, for example).
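
The resource-control part is just Docker's limit flags; a hedged sketch with arbitrary example values:

```bash
# Cap a transcoding-heavy Plex container so it can't starve the rest of
# the server: at most 4 CPUs and 8 GiB of RAM. All values are examples.
docker run -d --name plex \
  --cpus=4 --memory=8g \
  --device /dev/dri:/dev/dri \
  -v plex_config:/config \
  -v /mnt/media:/media:ro \
  plexinc/pms-docker
```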

[email protected] 2 points 1 year ago (last edited 1 year ago)

I'd recommend against Jellyfin in a VM if you need hardware transcoding, although YMMV GPU-wise.

I don't think it's unreasonable to run your Docker containers on your Proxmox host, or you can use LXC, since neither of those needs to deal with GPU passthrough. Yeah, you'll lose the full-VM backup stuff with Docker, but containers are easy to back up if you configure them properly.
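
"Configured properly" here usually means keeping state in named volumes, which turns a backup into a throwaway tar container; the volume and backup-path names below are assumptions:

```bash
# Archive a named volume's contents to a backup mount.
# "jellyfin_config" and /mnt/backup are example names.
docker run --rm \
  -v jellyfin_config:/data:ro \
  -v /mnt/backup:/backup \
  alpine tar czf "/backup/jellyfin_config-$(date +%F).tar.gz" -C /data .
```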

VelociCatTurd 2 points 1 year ago

I put just about everything I can in Docker containers, running on a VM in Proxmox. However, I do run Pi-hole in its own VM; for some reason it just kept stopping when in Docker and did not run very well. But I also don't use LXC containers: it's either Docker or VMs.

88reaper 2 points 1 year ago

I use a mix: a Debian VM with Portainer and some LXCs to basically do what you're asking... works great.

[email protected] 1 point 1 year ago

Thanks for all your comments. Seems like either way is fine. :)

[email protected] 1 point 1 year ago

Don't install Docker in an LXC when Proxmox runs on ZFS (which it has to when you want to set up an HA cluster with VM migration capabilities). You'll also run into problems with file access rights when using NFSv4 ACLs (instead of chmod) on the datasets. If you want to store and share a lot of data, maybe look into using TrueNAS SCALE as the hypervisor.
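
The ZFS pain point is that Docker's overlay2 storage driver historically didn't work on a ZFS-backed LXC root filesystem (before OpenZFS gained overlayfs support), so Docker would typically fall back to the slow, space-hungry vfs driver. A quick check from inside the container:

```bash
# Show which storage driver Docker actually picked; "vfs" is the
# red flag on a ZFS-backed LXC.
docker info --format '{{.Driver}}'
```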
