this post was submitted on 19 Jul 2023
35 points (97.3% liked)


Sorry, noob here. I have been using Linux for at least a decade, but some basic stuff still stumps me. Today it's file sharing. The idea is that the server is good at CPU and the NAS is good at storage: my NAS does run Docker, but the services are slow, while my server runs a bunch of Docker containers just fine but has limited disk space (SSD).

Want:

  • Share a directory on my NAS, so that my homelab server can use it.
  • Security is not important; the share does not need to be locked down.

Have:

  • Server+NAS are on their own little 1Gb Cisco switch, so network latency should be minimal.
  • The Linux NAS and the Linux server have separate users/UIDs/GIDs.

Whatever I try, it always ends up with errors about 'access denied' or read-only or something. I conclude that I am not smart enough to figure it out.

Help?

[–] manwichmakesameal 8 points 1 year ago (2 children)

I'm 100% sure that your problem is permissions. You need to make sure the permissions match. Personally, I created a group specifically for my NFS shares; when I export them, they are mapped to that group. You don't have to do this, you can use your normal users, you just have to make sure the UID/GID numbers match. The names can differ as long as the numbers match up.
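For reference, a minimal sketch of what such an export can look like in /etc/exports on the NAS. The subnet and GID here are hypothetical, and the all_squash/anonuid/anongid options are what do the "mapped to the group" part:

```
# /etc/exports on the NAS (hypothetical subnet and GID)
# all_squash maps every client user to the anonymous UID/GID below,
# so everything on the share is owned by the dedicated NFS group
/mnt/zraid_default/media  192.168.0.0/24(rw,sync,no_subtree_check,all_squash,anonuid=5000,anongid=5000)
```

After editing, sudo exportfs -ra on the NAS applies the new export list.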

[–] marche_ck 2 points 1 year ago

True. Another possibility is the permission settings on the mount point of the NFS volume on the server.
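A quick way to check that, sketched with a throwaway directory standing in for the real mount point: the numeric mode and owner of the directory are what access checks use, both before mounting (the local directory) and after (whatever the NFS server reports).

```shell
# Throwaway stand-in for the real mount point on the server
mkdir -p /tmp/demo-mountpoint

# Numeric mode and owner; compare this before and after mounting
stat -c 'mode=%a uid=%u gid=%g' /tmp/demo-mountpoint
```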

[–] PlutoniumAcid 1 points 1 year ago (1 children)

"make sure the UID/GID numbers match"

But how? Can I change the numbers?

I totally get that Linux is by design a multi-user system, but it is frustrating to deal with when I am the only person who ever works with these machines. I know that my docker user+group is 1038/66544, but most docker commands require sudo, so I am not even sure those values are the right ones. It is so opaque which IDs are in effect, for which commands (and on which machine!), when I am just me.
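There is no single place where this is visible, but a couple of read-only commands (no sudo needed) answer it per machine; run them on both boxes and compare the numbers:

```shell
# Numeric IDs of whoever runs this shell; the numbers, not the
# user/group names, are what NFS compares between machines.
id -u    # numeric UID
id -g    # numeric primary GID
id       # full picture, e.g. uid=1000(me) gid=1000(me) groups=...
```

For files a container has written to the share, stat -c '%u:%g' somefile shows which numeric owner actually ended up on disk.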

[–] manwichmakesameal 1 points 1 year ago (1 children)

Basically, when you make a new group or user, make sure the NUMBER it uses matches whatever you're using on your export. For example: if you run groupadd -g 5000 nfsusers, just make sure that whenever you create your share on the NAS, you use a GID of 5000, no matter what you actually name the group. Personally, I keep the names and GIDs/UIDs the same across systems for ease of use.
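To sanity-check that on each machine (GID 5000 and the group name follow the example above; groupadd needs root, so it is shown commented out):

```shell
# On each machine, create the group with an explicit GID (needs root):
#   sudo groupadd -g 5000 nfsusers   # the name may differ per machine

# Then verify what GID 5000 actually maps to locally:
getent group 5000 || echo "no group with GID 5000 on this machine"
```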

[–] manwichmakesameal 2 points 1 year ago* (last edited 1 year ago)

Also, to add to this: your setup sounds almost identical to mine. I have a NAS with multiple TBs of storage and another machine with plenty of CPU and RAM. Using NFS for your Docker share is going to be a pain. I "fixed" my pains by also declaring the shares inside my docker-compose files. What I mean by that is: specify the share in a top-level volumes section:

volumes:
  media:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.0,ro"
      device: ":/mnt/zraid_default/media"

Then mount that volume when the container comes up:

services:
  ...
    volumes:
      - type: volume
        source: media
        target: /data
        volume:
          nocopy: true

This way, I don't have to worry as much. I also use local directories for storing all my container info, e.g. ./container-data:/path/in/container.
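Put together, that pattern might look like this in one compose file; the service name and image are placeholders, and the media volume is the NFS-backed one defined above:

```yaml
services:
  mediaserver:                      # hypothetical service name and image
    image: example/mediaserver
    volumes:
      - ./container-data:/config    # container state stays on the fast local SSD
      - type: volume                # bulk media via the NFS-backed named volume
        source: media
        target: /data
        volume:
          nocopy: true
```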