this post was submitted on 04 Nov 2024
10 points (85.7% liked)

Selfhosted


Hello everybody, happy Monday.

I'm hoping to get a little help with my most recent self-hosting project. I've created a VM on my Proxmox instance with a 32GB disk and installed Ubuntu, Docker, and CosmOS on it. Currently I have Gitea, Home Assistant, NextCloud, and Jellyfin installed via CosmOS.

If I want to add more services to Cosmos, then I need to move the containers from the VM's 32GB disk onto an NFS share mounted in the VM, which currently has something like 40TB of storage behind it. My hope is that moving these containers will let them grow on their own terms while leaving the OS disk the same size.

Would some kind of link allow me to move the files to the NFS share while making them still appear in their current locations in the VM's OS (Ubuntu 24.04)? I'm not concerned about the NFS share being unavailable: it runs on the same server that virtualizes everything else, and it's configured to start before the other guests, so the share should be up and running by the time this VM needs it. If anyone can see an obvious problem with that premise, though, I'd love to hear about it.
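What I'm picturing is something like a symlink or a bind mount, e.g. (the paths here are just made up):

    # mount the NFS share somewhere on the VM
    sudo mount -t nfs 192.168.1.50:/export/appdata /mnt/appdata
    # then bind-mount a folder from it over the path an app already uses
    sudo mount --bind /mnt/appdata/nextcloud /path/the/app/expects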

top 12 comments
[–] [email protected] 6 points 1 month ago* (last edited 1 month ago) (1 children)

If you want to move your containers to a different location, look into configuring docker's data-root: https://stackoverflow.com/questions/24309526/how-to-change-the-docker-image-installation-directory

You copy /var/lib/docker to the new location and point data-root at it in /etc/docker/daemon.json.
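Roughly (the new path is just an example):

    # stop docker, sync the existing data over, then point docker at it
    sudo systemctl stop docker
    sudo rsync -a /var/lib/docker/ /mnt/bigdisk/docker/

    # /etc/docker/daemon.json
    {
      "data-root": "/mnt/bigdisk/docker"
    }

    sudo systemctl start docker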

I will say: moving data-root to an NFS mount isn't going to work well. I've tried it. Docker's storage drivers rely on filesystem features (overlayfs) that NFS doesn't provide, so instead of layering, your services end up duplicating each container's entire filesystem. That tanks performance and is basically unusable for anything but trivial examples. Docker's data-root really needs to live on a "physical" local disk.
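You can check which storage driver Docker actually ended up using:

    docker info --format '{{.Driver}}'
    # overlay2 = proper layering; vfs = a full copy per layer, which is what you tend to get on NFS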

I've had no issues using NFS shares mounted as docker volumes. It's just the data-root where it'll fail.
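For example, a volume like this (server address and export path are placeholders):

    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.1.50,rw,nfsvers=4 \
      --opt device=:/export/appdata \
      appdata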

[–] stardustsystem 2 points 1 month ago

That tracks with my experience as well. I've been trying to get a system set up where the OS and Docker live on a small disk by themselves and then go out to the larger RAID array for their data. But it sounds like that's not really going to work the way I want it to (probably why it's crashed on me so many times, too).

[–] ramielrowe 5 points 1 month ago (2 children)

In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you are already doing that, otherwise you're going to need to manually copy data out of the containers. Personally, if all you're talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers to point at the new NFS locations.
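As a rough sketch of that (paths are placeholders):

    # stop the stack, copy the data out to NFS, then repoint the volumes
    docker compose down
    rsync -a /srv/myapp/ /mnt/nfs/myapp/
    # edit the compose file so the volume points at /mnt/nfs/myapp
    docker compose up -d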

All this said though, some applications really don't like their data stored on NFS. I know Plex really doesn't function well when its database is on NFS. But the Plex media directories are fine to host from NFS.

[–] [email protected] 1 points 1 month ago

The key with NFS is performance. It's not that you can't store disk-dependent apps on NFS; you can, but you need to be aware of latency and speed.

[–] stardustsystem 1 points 1 month ago (1 children)

I'm still early enough in that if something's wrong or not ideal about the config, I can go scorched earth and have the whole thing back up and running in an hour or two.

Is there a better filesystem that I could share out for this kind of thing? My RAID Array is run through OpenMediaVault if that helps.

[–] ramielrowe 1 points 1 month ago

Docker's documentation lists the supported backing filesystems for container storage drivers.

In general, you should consider your container root filesystems completely ephemeral, but you'll want them on something local and low-latency. If you move most of your data to NFS, you can hopefully keep just a minimal local disk for images and containers.

As for your data volumes, it's likely going to be very application-specific. I've got Postgres databases running off remote NFS that are totally happy. I don't fully understand why Plex struggles to run its database/config dir from NFS; disappointingly, I generally have to host it on a filesystem and disk local to my Docker host.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (1 children)

If you have your services in compose files, you can point the volumes anywhere, even NFS.

Like the config below:

volumes:
  pihole01:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/mnt/data/docker/pihole01'
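
An NFS-backed variant would look something like this (server address and export path are placeholders):

volumes:
  pihole01:
    driver: local
    driver_opts:
      type: 'nfs'
      o: 'addr=192.168.1.50,rw,nfsvers=4'
      device: ':/export/docker/pihole01'
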
[–] stardustsystem 1 points 1 month ago (1 children)

So CosmOS does run through Compose files, but it makes them on the fly and gives you a moment before runtime to review them and make any changes.

Am I understanding right that your idea here is to put the Volumes on the NFS share and run through that, as opposed to having the data outside of a Volume just sitting on an NFS Mount?

[–] [email protected] 1 points 1 month ago

That's how I had my setup running: point the volume at NFS.

[–] just_another_person 1 points 1 month ago (1 children)

Your goal isn't super clear from the post.

Are you asking how to host an OS on an NFS share?

[–] stardustsystem 1 points 1 month ago (1 children)

So I have a 2TB NVMe for VM host disks and a 72TB RAID array on my server. My hope is to have the OS and Docker on the 32GB disk I set up for the VM (which lives on the NVMe), and then have all the files related to the web apps live in a folder on the RAID array, in a section meant just for that.

But the other responses in this thread make me think that's not really going to be an option. Maybe I could make a very large VM host disk, put it on the RAID array, and let Docker forget about the mount points entirely...
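Something like this, I guess, assuming the array shows up in Proxmox as a storage called 'bigstore' and the VM is ID 100 (both placeholders):

    # add a second, 500GB disk to the VM from the big storage pool
    qm set 100 --scsi1 bigstore:500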

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Have you looked into iSCSI?

Alternatively, you could add the NFS share to bare-metal Proxmox itself as guest storage.
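On the Proxmox side that's roughly (storage name, server, and export are examples):

    # register the NFS export as storage that can hold VM disk images
    pvesm add nfs omv-share --server 192.168.1.50 --export /export/vmdisks --content images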

Keep in mind you may be limited by the network. If you really want fast performance, you need the proper hardware.