this post was submitted on 28 Nov 2023
95 points (96.1% liked)

submitted 11 months ago* (last edited 11 months ago) by dtrain to c/[email protected]
 

What are some best practices in mounting NAS shares that you all follow?

Currently I mount it via fstab into my user’s home directory with full rwx permissions, but that feels wrong.

I’ve read suggestions to use the /mnt directory or the /media directory, but opinions differ.

My main concern is protecting against inadvertently deleting the contents of the NAS with an errant rm command. And yes, I have backups of my NAS too.

Edit: this is a home NAS with 1 user on this Linux PC (the other clients being windows and Mac systems)

Would love to hear everyone’s philosophy! Thanks!

top 36 comments
[–] [email protected] 31 points 11 months ago (4 children)

I use systemd mount units instead of fstab; that way I can specify a network dependency on the off chance there's no network connection. Plus I can have other services like Jellyfin depend on that mount unit so they start after the share is available.
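For example (a hedged sketch; the drop-in name and mount path are just placeholders mirroring the example further down the thread), a drop-in for the Jellyfin service can declare that ordering:

# cat /etc/systemd/system/jellyfin.service.d/wait-for-nas.conf
[Unit]
# pulls in and orders jellyfin.service after the mount unit for /mnt/data
RequiresMountsFor=/mnt/data

With that in place, systemd only starts Jellyfin once /mnt/data is actually mounted.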

[–] [email protected] 5 points 11 months ago

In fstab, there's a nofail option that I started using when mounting NFS shares and other disks that may be missing, so they don't kill my bootup.
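As a hedged example (server address and paths are placeholders), an NFS line in /etc/fstab that won't block the boot if the NAS is unreachable could look like:

# /etc/fstab
192.168.0.30:/mnt/tank/Media  /mnt/data  nfs4  nofail,_netdev,x-systemd.mount-timeout=30s  0  0

nofail keeps a missing share from failing the boot, _netdev marks it as needing the network, and the timeout stops the mount attempt from hanging indefinitely.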

[–] dtrain 2 points 11 months ago
[–] steel_moose 1 points 11 months ago (1 children)

Dipping my toes into this as well. Would you care to share the contents of your .mount unit file?

If I understand it correctly, systemd generates mount units at boot from fstab 🤔. Probably not possible to specify the network dependency in fstab.

[–] [email protected] 2 points 11 months ago
# cat /etc/systemd/system/mnt-data.mount
[Unit]
Description=nfs mount script

[Mount]
What=192.168.0.30:/mnt/tank/Media
Where=/mnt/data
Type=nfs4

[Install]
WantedBy=remote-fs.target

The file name has to match the path where the share is mounted, with hyphens instead of forward slashes.
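For reference, that name can be generated rather than worked out by hand, and the network ordering mentioned above (which the unit as posted doesn't declare) goes in the [Unit] section; a hedged sketch:

# systemd-escape -p --suffix=mount /mnt/data
mnt-data.mount

# additions to the [Unit] section of mnt-data.mount
After=network-online.target
Wants=network-online.target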

[–] [email protected] 17 points 11 months ago (1 children)

The NAS should be regularly backed up/snapshotted, so that even if you/a bad process deletes everything, you can restore it all quickly and easily.

[–] [email protected] 6 points 11 months ago (1 children)

A backup is an emergency protection, not a primary plan. This attitude is dangerously close to making the backup a critical part of their uptime.

[–] [email protected] 7 points 11 months ago

Having something rm your entire NAS is an emergency, not something that should be happening regularly. If it is, you've got bigger problems.

[–] [email protected] 8 points 11 months ago (2 children)

I'm curious, what file system do you use to mount your share? (SMB, SSHFS, WebDAV, NFS..?) I've never managed to get decent performance on a remote-mounted directory because of the latency, even on a local network, and this becomes an issue with large directories

[–] [email protected] 6 points 11 months ago (2 children)

Agreed on the latency issues. I tested SMB and NFS once and found them to be pretty much the same in that regard.

I'm interested to test iSCSI, as for some reason I think it might be better designed for latency.

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago)

If you want the lowest latency, you could try NBD. It's a block protocol but with less overhead compared to iSCSI. https://github.com/NetworkBlockDevice/nbd/tree/master

Like iSCSI, it exposes a disk image file, or a raw partition if you'd like (by using something like /dev/sda3 or /dev/mapper/foo as the file name). Unlike iSCSI, it's a fairly basic protocol (the API is literally only 9 commands). iSCSI is essentially just regular SCSI over the network.

NFS and SMB have to deal with file locks, multiple readers and writers concurrently accessing the same file, permissions, etc. That can add a little bit of overhead. With iSCSI and NBD, it assumes only one client is using the file (because it's impossible for two clients to use the same disk image at the same time - it'll get corrupted) and it's just reading and writing raw data.
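A rough sketch of what NBD looks like in practice (export name, image path, and addresses are hypothetical, and option spellings can vary between nbd versions):

# on the NAS: /etc/nbd-server/config
[generic]
[media]
exportname = /mnt/tank/media.img

# on the client (needs the nbd kernel module loaded)
modprobe nbd
nbd-client 192.168.0.30 -N media /dev/nbd0
mount /dev/nbd0 /mnt/data

The client sees /dev/nbd0 as an ordinary block device, so you format and mount it like a local disk - which is also why only one client can use it at a time.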

[–] [email protected] 1 points 11 months ago (1 children)

The main thing to note is that NFS is file-based storage (it acts like a share), whereas iSCSI is block-based (it acts like a disk). You'd really only use iSCSI for things like VM disks, 1:1 storage, etc. For home use cases, unless you're self-hosting (and probably even then), you're likely gonna be better off with NFS.

If you were to do iSCSI, I would recommend giving it its own VLAN. NFS technically should be isolated too, but I currently run NFS over my main VLAN, so do what ya gotta do
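On the isolation point, even without a separate VLAN you can at least restrict which addresses are allowed to mount an NFS export; a hedged example (subnet and path are placeholders) for /etc/exports on the NAS:

# /etc/exports
/mnt/tank/Media  192.168.10.0/24(rw,sync,no_subtree_check)

Then exportfs -ra applies the change. It's not a substitute for network isolation, but it narrows who can talk to the share.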

[–] [email protected] 1 points 11 months ago

Yeah, there are a few limitations to each. NFS, for example, doesn't play nicely with certain options if you're using a filesystem overlay (overlayfs), which can be annoying when using it for PXE environments. It does however allow you to mount it on several remote machines simultaneously, which I don't think iSCSI would play nicely with.

SMB, though, has user-based authentication built in, which can be quite handy, especially if you're not into setting up a whole Kerberos stack just to get that functionality with NFS.
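In practice that usually means mounting SMB with a credentials file instead of standing up Kerberos; a minimal sketch (server, share, user, and paths are all hypothetical):

# /root/.smb-creds   (chmod 600)
username=alice
password=changeme

# /etc/fstab
//192.168.0.30/media  /mnt/media  cifs  credentials=/root/.smb-creds,uid=1000,gid=1000,nofail,_netdev  0  0

The uid/gid options map the files to your local user, since CIFS doesn't carry Unix ownership the way NFS does (unless the server supports UNIX extensions).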

[–] [email protected] 5 points 11 months ago (1 children)

I've found that NFS gives me the best performance and the least issues. For my use cases, single user where throughput is more important than latency, it's indistinguishable from a local disk. It basically goes as fast as my gigabit NIC allows, which is more or less the maximum throughput of the hard disks as well.

A benefit of NFS over SMB is that you can just use Unix ownerships and permissions. I do make sure to synchronize UIDs and GIDs across my devices because I could never get idmapping to work with my NAS.
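Synchronizing IDs just means making sure the numeric UID/GID of your user match on the client and the NAS, and fixing one side if they don't; a hedged sketch (user name and IDs are placeholders):

# run on both the client and the NAS
id alice
# uid=1000(alice) gid=1000(alice) groups=...

# if they differ, align one side (here the client), then fix ownership
sudo usermod -u 1000 alice
sudo groupmod -g 1000 alice
sudo chown -R alice:alice /home/alice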

[–] [email protected] 1 points 11 months ago

idmapping

idmap only works with Kerberos auth, but IIRC I didn't have to set anything up specifically for it. Though I've also never really had to test it, since my UIDs coincidentally match; I just checked with the nfsidmap command.

[–] [email protected] 6 points 11 months ago (1 children)

How many users are there?

Is there a chance that the computer will boot without access to the NAS (aside from failure conditions)?

Are you doing anything with ownership to prevent reading, or changing, sensitive files?

[–] dtrain 2 points 11 months ago (3 children)

This is a home NAS with one user (myself) on this Linux client. Other clients will be Windows for other users.

My NAS user has full rw permissions across the NAS shares (but not admin privs). I’m not super comfortable with this config, as it strikes me as too permissive to mount in the home directory. Would love to hear better approaches.

Yes, there is a chance the NAS can be down when booting the Linux PC.

[–] [email protected] 5 points 11 months ago

Well, with multiple users you'd need to decide what the use case is for the whole NAS and then work down from there.

Are you sharing everything in the NAS with everyone? In that case your NAS setup is fine, just a little permissive, because with RW to everything, the end users can break everything.

If it were me setting this up, I'd have different mount points for different users: one mount for each user that only they can read/write (not even you should be able to see it), one mount that everyone can read/write, and, if you want to go a little bonkers, one mount that everyone can read but only you can write to.

Then you'd mount those three to separate mounts in your /media, and you can link them from your home directory for specific use cases.
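For example (paths are placeholders), with the three shares mounted under /media the links are one-liners:

ln -s /media/nas-private  ~/nas-private
ln -s /media/nas-shared   ~/nas-shared
ln -s /media/nas-public   ~/nas-public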

Obviously this is completely overkill, but you can take the parts that sound appealing to you and ignore the rest.

[–] [email protected] 1 points 11 months ago

There aren't many options... you can either modify the share or you cannot. 🙂 Pick one.

[–] MasterBlaster 1 points 11 months ago

I set up the mount points in my configuration as dynamically mounted NFS volumes and added bookmarks to Nautilus. You can get to the volume either with the cd command or via right-click -> Open Terminal Here. You can shut down the NAS and only lose the share, which comes back when the system goes online.

This is much better than WebDAV, which is fine for simple sharing or for devices that can't handle NFS easily, like Android phones.
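If "dynamic" here means mounted on demand, autofs is one common way to get that behaviour; a minimal sketch, reusing the server and path from the example earlier in the thread:

# /etc/auto.master
/media/nas  /etc/auto.nas  --timeout=60

# /etc/auto.nas
data  -fstype=nfs4  192.168.0.30:/mnt/tank/Media

The share then appears at /media/nas/data the first time something accesses it, and is unmounted again after the idle timeout - so a powered-off NAS only costs you the share, not the boot.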

[–] [email protected] 2 points 11 months ago (2 children)

Mounting it in fstab is a bad idea... and mounting it in your home directory is even worse.

Just make some desktop entries with the shares and that should be enough.

[–] [email protected] 8 points 11 months ago (1 children)

What's so wrong with fstab?

[–] [email protected] 3 points 11 months ago* (last edited 11 months ago) (1 children)

Well, for one, it's network attached storage. If it's not present in the network for one reason or another, guess what, your OS doesn't boot... or it errors during boot, depending on how the kernel was compiled and what switches your bootloader sends to the kernel during boot. Second, this is an easy way for malware to spread, especially if it's set to run after user logon.

[–] Scholars_Mate 15 points 11 months ago (1 children)

Well, for one, it's network attached storage. If it's not present in the network for one reason or another, guess what, your OS doesn't boot... or it errors during boot, depending on how the kernel was compiled and what switches your bootloader sends to the kernel during boot.

Just use nofail in the fstab.

Second, this is an easy way for malware to spread, especially if it's set to run after user logon.

If your fileshare is accessible to you, it is also accessible to malware running as your user. Mounting the share via a filemanager doesn't change this.

[–] [email protected] 0 points 11 months ago

Just use nofail in the fstab.

Really? Didn't know about this switch, thanks for the info ☺️.

If your fileshare is accessible to you, it is also accessible to malware running as your user. Mounting the share via a filemanager doesn't change this.

It does, it's not mounted on boot.

In general, mounting a network location at boot is a bad idea in any OS, unless you know exactly what you're doing (all of the rigs that mount it are on a separate network, internet access is limited to specific ports, and none of them have users working on them like daily drivers doing whatever on them - basically, a server cluster is the only scenario where mounting a network share at boot makes sense).

Why do you think Windows users nowadays generally avoid mounting shares as network drives and instead access them through shortcuts? The exact same reason, except in Windows the share is mounted at logon (as far as I know; I might be wrong and the share might be mounted at boot and just reported as missing when a user logs in). It's safer if the location of the share is not known at boot, period. When the user logs in and decides to copy something to the share (an unknown period of time after the login), that is a different story.

Sure, well-written malware will find a way to replicate itself and infect other rigs even if you don't mount the share at boot, but at least you're shielding yourself from the badly written ones.

[–] Molecular0079 4 points 11 months ago (2 children)

I agree; for most cases, just mount it via your file manager of choice. If you're using it as backing storage for another server, that's a use case where fstab is fine.

[–] [email protected] 1 points 11 months ago (1 children)

I've found that Dolphin, at least, is much slower with network mounts than a CLI-based "mount".

[–] Molecular0079 1 points 11 months ago

Lately performance has improved dramatically. A year ago, it used to be about half-speed, but now it's basically on par with a CLI-mount.

[–] dtrain 1 points 11 months ago (2 children)

If I mount it in the file manager, how do I reference that location in the terminal to say do copy operations to it?

[–] [email protected] 2 points 11 months ago

It has to have a mount point somewhere. Just double-click the desktop entry; that will mount it wherever you told it to, and then you can copy to that location, easy peasy 😉.

[–] Molecular0079 1 points 11 months ago

Which file manager are you using?

In Nautilus, you can right click anywhere and click Open in Console, at which point it will open up a terminal leading to a gvfs mount directory.

In KDE it's slightly more annoying because there's no right-click option to quickly open it in a terminal, but like GVfs, there's a mount directory that you can access at /run/user//kio-fuse-/smb/.
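From a plain terminal you can also drive the same GVfs mounts with gio; a hedged example (server and share names are placeholders):

gio mount smb://192.168.0.30/media
ls /run/user/$(id -u)/gvfs/

The share shows up under that gvfs directory with a name like smb-share:server=...,share=..., and you can cd into it like any other path.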

[–] [email protected] 2 points 11 months ago

If you do this, make sure you use snapshots, ideally taken automatically. You wouldn't want ransomware to overwrite the files on your NAS.
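A minimal sketch of what automatic snapshots can look like, assuming a ZFS-based NAS with a dataset named tank/Media (adjust for Btrfs or whatever your NAS actually runs - most NAS UIs can schedule this for you):

# take a dated, read-only snapshot of the dataset
zfs snapshot tank/Media@$(date +%Y-%m-%d)

# or run it daily from root's crontab (% must be escaped in cron)
0 3 * * * zfs snapshot tank/Media@$(date +\%Y-\%m-\%d)

Since snapshots are read-only, ransomware that overwrites files over the share can't reach the snapshot copies.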

[–] TCB13 2 points 11 months ago
[–] [email protected] 2 points 11 months ago

I just mount the share in my file manager