this post was submitted on 13 May 2024
10 points (100.0% liked)

Hello, I am seeking a simple solution for running a list of "chown -R" commands in script.sh.

It takes a long time to sequentially execute all of these chown commands recursively because the directories contain so many files. I want to tackle the root-level directories in parallel to speed things up. I imagine there must be a simple way to do this while keeping the list of commands in a single file. xargs and some of the other things I saw online looked like bad fits, or like over-engineering the problem.

all 16 comments
[–] [email protected] 6 points 7 months ago (1 children)

find <directory> -type f -print0 | xargs -0 -P 4 -n 500 chown <user>:<group>

That should find every file in your directory recursively and pass the list to xargs, which will then spawn up to four processes, each calling chown on batches of up to 500 files, starting new batches as earlier ones finish.

In general though, if you regularly need to chown that many files, it's better to find a way to make sure they have the right ownership from the start.
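For example, with a made-up owner and path, plus a second pass for the directories (which -type f skips):

# "alice:alice" and /srv/data are placeholders; files first, then the directories themselves
find /srv/data -type f -print0 | xargs -0 -P 4 -n 500 chown alice:alice
find /srv/data -type d -print0 | xargs -0 -P 4 -n 500 chown alice:alice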

[–] [email protected] 1 points 7 months ago (2 children)

Thanks for adding that tidbit at the end. The reason the permissions get out of alignment is that different non-privileged accounts (used for safety) regularly write or copy files from outside the main system. I am the furthest thing from a Linux expert, so maybe you have a recommendation or better insight now that I've explained that? This setup necessitates changing the owner and permissions regularly, especially when I need to interact with the files ad hoc and have to wait for my script to run and complete.

[–] nottelling 5 points 7 months ago (1 children)

If you have multiple users writing to a directory, you should be relying on groups, permissions, and sgid and not care who the owner is.
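For instance (the group name and path here are made up), something like this sets up a shared directory where new files inherit the group:

sudo groupadd sharedgrp                 # hypothetical shared group
sudo usermod -aG sharedgrp alice        # add each user who writes to the share
sudo chgrp -R sharedgrp /srv/share      # hypothetical shared directory
sudo chmod -R g+rwX /srv/share          # group read/write; execute only on dirs and executables
sudo chmod g+s /srv/share               # sgid: files created inside inherit the group

Combined with a sane umask (e.g. 002), group members can then work on each other's files without any chown.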

[–] [email protected] 1 points 7 months ago (2 children)

But what if user A, in a new group, creates dir "abc" - will "abc" automatically be set to the correct group? I would think the group would behave just like the owner: not set correctly until set manually.

[–] [email protected] 2 points 7 months ago (1 children)

Yeah, having learned on Windows servers for 20 years, I'm struggling with permissions and groups in Linux in general.

In Windows it's as easy as enabling 'children inherit from parent', and then users can go and create whatever they like; if they can write, whatever they write gets the permissions inherited from the parent. If you change a folder deeper in the tree, you can unlink its inheritance from the parent, and it can then optionally become the new parent for all of its children's permissions.

I tried a couple of times to do this in Linux and I've always struggled, due to my own lack of knowledge and understanding. Reading about it, I feel like I keep coming to the wrong conclusions too, perhaps based on my experience and biases.

Anyway I know it's not helpful but I feel the struggle.

[–] [email protected] 2 points 7 months ago

Thanks for chiming in, I'm glad it's not just me. I feel like I have a much stronger understanding of things more complicated than groups! That makes it feel worse.

[–] [email protected] 2 points 7 months ago* (last edited 7 months ago) (1 children)

I don't really understand your use case.

It sounds like you have multiple users creating files in a directory, and some users are creating them with more-restrictive permissions than you want -- like, you want to force them to make their stuff accessible to everyone else -- and you're trying to counter that by regularly modifying all the permissions?

If you set the sgid bit on the parent directory, then by default, things created in that directory will inherit the group of the parent directory.
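A quick illustration, with placeholder names:

chmod g+s /srv/share            # set the sgid bit on the parent directory
touch /srv/share/newfile
ls -l /srv/share/newfile        # its group matches /srv/share's group, not the creator's primary group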

But a user can still change permissions so that that isn't the case.

It's possible that you could use ACLs or something like that to address your problem, but I don't know what it is that you're trying to achieve.

[–] [email protected] 1 points 7 months ago (1 children)

What you proposed with sgid sounds like it might be what I need. All of the users are controlled by me; it's just that when they connect to the SMB share of the main system from other devices, I figured it was good security to use an account separate from my main account on the system, so they can't access the entire system or execute sudo commands.

[–] [email protected] 1 points 7 months ago

it's just that when they connect to the SMB share of the main system from other devices,

If this is specific to a Samba server, it looks like you can set it to use whatever uid/gid you want.

https://unix.stackexchange.com/questions/530038/remap-uid-in-samba-share
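For example, something along these lines in the share's definition in smb.conf (the share name, path, user, and group are placeholders) should make everything written through the share land as one fixed owner/group:

[share]
    path = /srv/share
    writable = yes
    # everything written through this share lands as this user/group
    force user = alice
    force group = sharedgrp
    create mask = 0664
    directory mask = 2775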

[–] [email protected] 3 points 7 months ago (1 children)

Hrm, you might look into file ACLs.

https://serverfault.com/questions/444867/linux-setfacl-set-all-current-future-files-directories-in-parent-directory-to

setfacl is a command that lets you make user-level (or other) permission changes outside of the usual ownership semantics.
So you could, for example, do something like:
setfacl -d -R -m u:<your username>:rwx /the/very/top/directory/
That should make it so that newly created files and folders have a default ACL allowing you access. Run it again without the -d flag to also apply the ACL to existing files.
It'll take a minute to loop through everything, but you should only have to run it once, so it's not a recurring issue.
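You can double-check with getfacl (it comes in the same acl package as setfacl); the default: entries it prints for a directory are what new files will inherit:

getfacl /the/very/top/directory/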

I hope that gets you what you need. :)

[–] [email protected] 1 points 7 months ago (1 children)

facls are the shizzle. Seriously. I'm really not sure why people use chmod at all anymore. It's fewer characters, maybe?

For OP, a tool like fd can turn the script into a couple of very short one-liners; and unlike find, it runs its execs in parallel by default:

sudo fd . <path> -t f -x setfacl -m u:$(id -un):rw '{}'
sudo fd . <path> -t d -x setfacl -m u:$(id -un):rwx '{}'

will do the thing in parallel; the first line, for all the files; the second, for all the directories.

As others have said, if you're needing to do this a lot, it's best to fix whatever is setting the perms in the first place, or, as @ricecake and others have said, set default perms/facls so they get inherited.

facls are far more expressive than base perms, and are supported by every major current Linux filesystem. Not FAT, but ACLs on FAT filesystems are all f'ed up anyway.

[–] [email protected] 2 points 7 months ago

My guess is that it's not "the standard" for managing file ownership, since it doesn't manage ownership. As a result, ACLs are shown less often in tutorials and tool output.
The ownership semantics still need to exist and be managed, so a lot of less sophisticated software will just check ownership, not its actual ability to access.

Tools and capabilities come quickly, but the ecosystem as a whole moves at a glacial pace. Often that's good, because it means userland APIs and programs don't just fail for no good reason, which creates the stability that makes Linux popular and useful. It also makes it painful to get "new stuff" into widespread use, where "new" means less than 30 years old.
You see the same thing with SELinux. It's fine now! But it's still scary. And we'll finally have btrfs as the standard in 2040, I'll wager.

[–] [email protected] 4 points 7 months ago* (last edited 7 months ago)

If it's bash, add & at the end of each chown line (no ;), then put "wait" on its own line at the point where the script should wait for all of them to finish.

If you don't add "wait", the chowns could be killed when the script finishes. To prevent that, you have to redirect their error and standard output somewhere that still exists after the script finishes (/dev/null or a logfile) and call "disown %+" to detach the processes from the parent shell.
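A minimal sketch of that (owner and paths are placeholders):

#!/usr/bin/env bash
# each chown runs in the background, so they execute in parallel
chown -R alice:alice /srv/data/music &
chown -R alice:alice /srv/data/photos &
chown -R alice:alice /srv/data/docs &
wait    # block until every background chown has finished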

[–] NegativeLookBehind 3 points 7 months ago* (last edited 7 months ago)

Job control/process backgrounding sounds like what you want, if I'm understanding the situation correctly.
