Hi all. I'm hoping to get some help from folks with more Linux experience than me. I'm not a Linux noob, but I'm far from an expert, and I have some huge gaps in my knowledge.

I have a Synology NAS that I'm using for media storage, and a separate Linux server that uses that data. Currently the NAS is mounted with Samba; it mounts automatically at boot via an entry in /etc/fstab. This works okay, but I don't like how Samba handles file ownership. The whole volume mounts as the user who mounts it (specified in fstab in my case), and every file in the volume is owned by that user. So if I wanted two users on my server to each have their own directory, I would need to mount each directory separately for each user. That's workable in simple scenarios, but if I moved my Lemmy instance volumes to the NAS, the file ownership of the DB and pictrs volumes would be lost and the users inside the containers wouldn't be able to access the data.
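
For illustration, the fstab entry described above would look something like this (server name, share, paths, and uid/gid are placeholders):

    # /etc/fstab -- CIFS mount; everything ends up owned by uid/gid 1000
    //nas.local/media  /mnt/media  cifs  credentials=/root/.smbcreds,uid=1000,gid=1000,iocharset=utf8  0  0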

Is there a way to configure Samba to preserve ownership? Or is there an alternative to Samba that supports this?

Edit:

Okay, so I set up NFS, and it appears to do what I want. All of the user IDs carry over when I cp -a my files, and my two users can write to the directories I set up for them, which they own. It all seems fine on the surface. But when I copied my whole Lemmy folder over and tried to start the containers, postgres still crashes: the logs alternate between "Permission denied" and "chmod: operation not permitted" forever. I logged into the container to see what's going on, and inside it, root can't access a directory, which is bizarre; the container's root user can access that same directory when the container runs on my local filesystem. As a test, I copied the whole Lemmy directory from my local filesystem to my local filesystem (instead of from local to NFS), and that worked fine.
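
For anyone following along, the NFS setup boils down to something like this (paths and subnet are placeholders; on a Synology the export is managed through its UI rather than edited by hand):

    # server side: /etc/exports
    # note: the default root_squash option maps the client's root to nobody,
    # which can trip up containers that chown/chmod files as root
    /volume1/media  192.168.1.0/24(rw,sync,no_subtree_check)

    # client side: /etc/fstab
    nas.local:/volume1/media  /mnt/media  nfs  defaults  0  0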

I think this exact issue is outside the scope of my original question, and I might need to make a post on [email protected] instead, since what I originally asked for has been accomplished with NFS.

top 12 comments
[–] [email protected] 4 points 1 year ago (1 children)

Did you specify the user and group ID in fstab? Not setting those might be why everything shows up with one owner and one set of permissions.

Also, consider using NFS instead of SMB. Synology supports both, and I've generally found NFS easier to work with (though I just run a simple home server).
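
If you stick with SMB, the options I mean look something like this in fstab (values are just examples):

    # uid/gid set the owner; file_mode/dir_mode force the permission bits
    //nas/share  /mnt/share  cifs  credentials=/root/.smbcreds,uid=1000,gid=1000,file_mode=0664,dir_mode=0775  0  0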

[–] [email protected] 1 points 1 year ago (1 children)

I am specifying the user and group in fstab, and everything mounted is owned by that user and group. But if another user, one that isn't in the group, wants to write to it, they don't have access. The main issue is users inside containers, since they can't simply be added to a group; or rather, it would be unnecessarily complicated to add them to one.

I will take a look at NFS and see if that fits my needs.

[–] [email protected] 3 points 1 year ago (1 children)

That's pretty much how SMB works in general, but (assuming Synology supports it; I'm not sure) you can force ownership and permissions for the files at the server end. In your case that would pretty much mean rw privileges for everyone, so it's not ideal security-wise, but if your environment is suitable and that's a compromise you're willing to make, it's possible (rough sketch below). You could also check whether setfacl suits your needs.

And then of course there's NFS, but the tradeoff is that if you need to access the files from anything other than a Linux box, it's not ideal either, especially if you're after fine-grained privileges across multiple systems.
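
Assuming Synology exposes the stock Samba options, forcing ownership server-side looks something like this in smb.conf (share path and names are made up), and the setfacl route is a one-liner:

    # smb.conf on the server: everything written to the share
    # ends up owned by mediauser:mediagroup with these modes
    [media]
       path = /volume1/media
       force user = mediauser
       force group = mediagroup
       create mask = 0664
       directory mask = 0775

    # POSIX ACL alternative: grant one extra user rwx recursively
    setfacl -R -m u:alice:rwx /volume1/media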

[–] [email protected] 1 points 1 year ago

macOS handles NFS completely fine, and Windows apparently supports up to NFSv3, FWIW. But SMB is definitely more widely supported (no problem running both at the same time, though).

[–] [email protected] 4 points 1 year ago (1 children)

For what it's worth, NFS in my experience is also faster. I had a very similar use case (but QNAP instead of Synology) and switched everything over to NFS and saw a performance gain. Little things like previewing IP camera security footage felt slow over SMB but snappier over NFS. I'd gotten over the user thing, but the speed is why I switched.

I did eventually wipe QNAP's software in favor of stock Debian -- but the prevailing wisdom seems to be that Synology's OS is pretty good.

[–] [email protected] 1 points 1 year ago

I can confirm this matches my experience. I probably didn't tune Samba correctly or something, but when browsing my NAS over Samba it regularly took about a second per folder navigation, whereas NFS was instant. I didn't care enough to figure out why, so NFS is what I use.

[–] [email protected] 3 points 1 year ago (1 children)

Samba always acts as exactly one user (the one whose credentials you mounted with). NFS does what you want.

[–] [email protected] 2 points 1 year ago (1 children)

Thank you! I will take a look at NFS later tonight when I have some time.

[–] [email protected] 3 points 1 year ago

Note that having any kind of real authentication with NFS (other than "limiting client machines by IP and then trusting them to report the correct user", which might be fine for your local network) and also encryption requires Kerberos. It's not the end of the world to set up (I have it in my local network) but it is more involved than setting up Samba accounts.
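
For the curious, Kerberized NFS mostly comes down to picking a security flavor on both ends; something like this, assuming a working KDC and nfs/ service principals already exist:

    # server: /etc/exports -- krb5p = authentication + integrity + encryption
    /export/media  *(rw,sec=krb5p,no_subtree_check)

    # client: mount with the matching flavor
    mount -t nfs4 -o sec=krb5p nas.example.com:/export/media /mnt/media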

[–] NoRecognition84 2 points 1 year ago* (last edited 1 year ago) (1 children)

If your Synology NAS supports SSH, you might want to check whether you can use sshfs. I used to use Samba and NFS on my Debian home server, but switched to sshfs a few months ago. File transfers seem a little quicker than with Samba.
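
In case it helps, mounting it is a one-liner once SSH works (user, host, and paths are placeholders):

    # FUSE mount over SSH; -o reconnect re-establishes the mount after drops
    sshfs admin@nas.local:/volume1/media /mnt/media -o reconnect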

[–] [email protected] 1 points 1 year ago

SSHFS has a lot of overhead from FUSE as well as the encryption. It's much better to use NFS on the LAN if you care about speed.

[–] [email protected] 1 points 1 year ago

Fstab is the way