Hello, I'm looking for a simple way to run a list of "chown -R <mydir>" commands in script.sh.

It takes a long time to execute all of these recursive chown commands sequentially because the directories contain so many files. I want to tackle the root-level directories in parallel to speed things up. I imagine there must be a simple way to do this while keeping the list of commands in a single file; xargs and some of the other things I saw online looked like bad fits, or like over-engineering the problem.
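For concreteness, script.sh is just a flat list of these commands; something like the sketch below is the kind of parallel version I'm imagining (paths and owner are placeholders):

    #!/usr/bin/env bash
    # Current form: one recursive chown per top-level directory, run one after another.
    # Hoped-for form: background each chown so the directories are processed in parallel.
    chown -R appuser:appgroup /data/dir1 &
    chown -R appuser:appgroup /data/dir2 &
    chown -R appuser:appgroup /data/dir3 &
    wait    # don't exit until every background chown has finished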

  • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 · 6 months ago

    facls are the shizzle. Seriously. I’m really not sure why people use chmod at all anymore. It’s fewer characters, maybe?

    For OP, a tool like fd can turn the script into a couple of very short commands, and unlike find, it runs its execs in parallel by default:

    me=$(id -un)    # capture your username up front so ${me} expands before sudo runs
    sudo fd . <path> -t f -x setfacl -m "u:${me}:rw" '{}'
    sudo fd . <path> -t d -x setfacl -m "u:${me}:rwx" '{}'

    will do the thing in parallel: the first fd line handles all the files, the second all the directories.
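    If the disk rather than the CPU is the bottleneck, the number of parallel jobs can be capped with fd's -j/--threads flag; a variant of the first fd line above:

    sudo fd . <path> -t f -j 4 -x setfacl -m "u:${me}:rw" '{}'    # limit to 4 parallel setfacl invocations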

    As others have said, if you need to do this a lot, it's best to fix whatever is setting the perms in the first place, or, as @ricecake and others have said, set default perms/facls so they get inherited.
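    For example, a default ACL set on the parent directory is inherited by anything created under it afterwards (a minimal sketch, same placeholder user and path as above):

    setfacl -d -m "u:${me}:rwX" <path>    # -d sets the default ACL; new files/dirs under <path> inherit it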

    facls are far more expressive than base perms, and are supported by every major current Linux filesystem. Not FAT, but ACLs on FAT filesystems are all f'ed up anyway.
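    As a small illustration of that expressiveness (hypothetical users and file): several independent per-user and per-group entries can sit on one file, which plain mode bits can't express:

    setfacl -m u:alice:rw,u:bob:r,g:auditors:r report.txt
    getfacl report.txt    # prints each entry on its own line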

    • ricecake@sh.itjust.works · 6 months ago

      My guess is that it's because facls aren't "the standard" for managing file ownership, since they don't manage ownership at all. As a result, they show up less often in tutorials and tool output.
      The ownership semantics still need to exist and be managed, so a lot of less sophisticated software will just check ownership, not its actual ability to access a file.
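      A quick shell illustration of that gap (hypothetical path): the ownership test can fail even though an ACL already grants access:

      f=/srv/data/report.txt
      [ -O "$f" ] && echo "owned by me"      # what naive tools check: am I the file's owner?
      [ -r "$f" ] && echo "readable by me"   # what actually matters: can I access it (ACLs count here)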

      Tools and capabilities arrive quickly, but the ecosystem as a whole moves glacially slowly. Often that's good: it means userland APIs and programs rarely just fail for no good reason, and that stability is what makes it popular and useful. But it also makes it painful to get "new stuff" into widespread use, where "new" means less than 30 years old.
      You see the same thing with SELinux. It's fine now! But it's still scary. And I'll wager we'll finally have btrfs as the standard in 2040.