this post was submitted on 25 Aug 2024
38 points (97.5% liked)

Selfhosted


I have a decent 2-bay Synology, but I want all my Docker images/VMs running on a more powerful machine connected to the same LAN. Does it ever make sense to do this for media serving, or will involving an extra device add too much complexity vs. just serving from the NAS itself? I was hoping to have calibre/Home Assistant/tube-type services, etc. all running off a mini PC with a Ryzen 7 and 64 GB RAM instead of the NAS.

My Linux knowledge is intermediate; my networking knowledge is begintermediate, and I can generally follow documentation okay even if it's a bit above my skill level.

top 14 comments
[–] [email protected] 19 points 2 months ago* (last edited 2 months ago) (2 children)

Generally it's simpler if you have your NAS separate from your application server. Synology runs NAS really well, but a separate application server for docker/etc is a lot easier to use and easier to upgrade than running on Synology. Your application server can even have a GPU for media transcoding or AI processing. Trying to do everything on one box makes things more complicated and fragile.

I would recommend something like Debian or NixOS for the application server, and you should be able to manage it over SSH. You can then mount your NAS as an NFS share, and then run all your applications in Docker or NixOS, using the NAS to store all your state.
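As a minimal sketch of that layout (the hostname, export path, and mount point are made up; it assumes the NFS client package, e.g. nfs-common on Debian, is installed on the application server):

```
# /etc/fstab on the application server -- mount the Synology's NFS export:
nas.local:/volume1/media  /mnt/media  nfs  defaults,nofail  0  0
```

Containers can then bind-mount /mnt/media (e.g. `- /mnt/media:/media:ro` in a compose file), so the application server stays stateless and all the data lives on the NAS.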

[–] njordomir 7 points 2 months ago (1 children)

This answers my question. I wasn't sure if the server would have to download the whole file from the NAS prior to serving it.

I run my Nextcloud on Debian, have run Debian-based distros for a few years, and I've done NFS from my Synology to my laptop. I might be able to do it!

Wish me luck, and thanks for responding.

[–] monkeyman512 3 points 2 months ago* (last edited 2 months ago)

Your biggest potential bottleneck is if your NAS and app server only have a single 1 Gb network port each. This may not be a problem depending on your usage, but it is an important consideration to keep in mind.
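To put that in perspective, a quick back-of-the-envelope check (the bitrate is illustrative; a typical 4K remux streams at very roughly 40 Mb/s):

```shell
# How many 4K streams fit through a single gigabit port?
link_mbps=1000        # one 1 Gb/s NIC
stream_mbps=40        # illustrative 4K remux bitrate
echo "$(( link_mbps / stream_mbps )) concurrent streams before the link saturates"
```

In practice, running something like iperf3 between the NAS and the app server will show the real-world throughput of the link.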

[–] [email protected] 1 points 2 months ago (1 children)

I have an old midi-tower standing around with everything inside but drives.

Is it stupid to just set up the drives as zfs inside the case and let my docker services run on the same machine (as long as there is enough RAM etc. of course)?

Or should I get another PC as application server?

[–] [email protected] 3 points 2 months ago

If you're not using something like Synology, it isn't really an issue to run applications and NAS on the same machine. I would generally recommend separating them so you have more options in the future if you want to run multiple servers for HA or expansion, but it should be fine either way. It is worth noting that quad-core N100 computers are like $150 on AliExpress if you want a cheap application server (or several).

[–] [email protected] 7 points 2 months ago

I personally have them be the same device, but I have a DIY NAS, so my specs are already way overkill for regular NAS duty (it's my old desktop PC).

Assuming your home network is fast, you should be fine to split them up. I personally designed my setup to make it easy to move things around should I decide to. I use Docker containers for everything, Caddy for TLS, and HAProxy set up at the edge to route based on domain, so moving a service to another device is just:

  1. copy relevant docker compose and Caddy config to new machine
  2. set up network mounts for anything the containers need
  3. point HAProxy (and my router DNS) to the new address
  4. test
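As a rough sketch of step 3, domain-based routing in HAProxy might look like the fragment below (the domains, addresses, and backend names are invented; the real config depends on the setup, and TLS here still terminates at Caddy on each box):

```
# /etc/haproxy/haproxy.cfg (fragment) -- route by SNI to whichever
# machine currently runs the service, passing TLS through untouched.
frontend https_in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend jellyfin_box if { req.ssl_sni -i jellyfin.example.com }
    default_backend main_box

backend jellyfin_box
    mode tcp
    server new-host 192.168.1.20:443 check

backend main_box
    mode tcp
    server old-host 192.168.1.10:443 check
```

With this shape, moving a service to another device is just editing one backend's `server` line.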

I don't have to remember where any of the config files are, since they all live next to the compose file, and I can't forget which directories need to be mounted because they're already listed in the compose file.

So as long as you make it easy for yourself to move things around, it really doesn't matter where your actual data lives.

[–] kokesh 6 points 2 months ago

I run Jellyfin on my thin client server, with movie library folders mounted from my old dual-bay Buffalo NAS. Works like a charm.

[–] [email protected] 5 points 2 months ago* (last edited 2 months ago)

Running the media streaming software on a separate machine is a good idea IF you need transcoding; i.e., you need/want to translate the files into another format or a lower quality (for poor remote connections) on the fly before serving them to users.

If your clients can play the files just fine as-is, another machine doesn't really add anything except complexity.

[–] [email protected] 5 points 2 months ago (1 children)

If you can map a network drive (a very easy fstab edit, BTW), then yes, it's a great way to go.

That's what I do: I have two 5-bay NASes, both using all 4 uplinks (LAG) to my switch, and my media server is an LXC on an 8th-gen Intel, with GPU passthrough.

If you reboot your NAS, you may need to reconnect from the server. If you reboot your server, you don't have to do anything, since it connects when it starts up. If you end up needing more space, you just mount the new NAS alongside the existing one.

To me it's the better approach.

[–] [email protected] 3 points 2 months ago (1 children)

Commenting just to add "nofail" to the fstab.

I didn't do this in Proxmox and then the drive stopped working and so did Proxmox. As a noob I ended up starting fresh and losing lots.

After adding nofail the services start up, just without the NAS attached. Without nofail it just doesn't boot.

Nofail for the win
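Concretely, such an fstab entry might look like this (the host and paths are hypothetical); `nofail` lets boot continue if the NAS is unreachable, and adding `x-systemd.automount` makes systemd retry the mount on first access instead of only at boot:

```
# /etc/fstab -- NFS mount that won't block boot if the NAS is down:
nas.local:/volume1/media  /mnt/media  nfs  nofail,x-systemd.automount  0  0
```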

[–] [email protected] 1 points 2 months ago

If it's the only thing it's doing, or if it's critical for other services, then yes, nofail is a good choice.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

DNS — Domain Name Service/System
LXC — Linux Containers
NAS — Network-Attached Storage
NFS — Network File System, a Unix-based file-sharing protocol known for performance and efficiency
SSH — Secure Shell for remote terminal access
SSL — Secure Sockets Layer, for transparent encryption
TLS — Transport Layer Security, supersedes SSL

[Thread #937 for this sub, first seen 25th Aug 2024, 02:25]

[–] just_another_person 1 points 2 months ago

Depending on your NAS's ports, you may get a better direct connection over USB vs. Ethernet, but otherwise no issues.

[–] irotsoma 1 points 2 months ago

Yeah, you definitely should run it on a separate machine. A home NAS itself probably shouldn't be doing anything beyond serving files and basic maintenance; using it for too much will reduce its ability to serve data fast enough. Just be sure the media server and NAS have appropriate network cards, preferably gigabit (though even 100 Mbit is probably enough if the rest of your network isn't already too busy), and ideally are connected to the same switch (again, preferably gigabit) with good-quality network cables.