Software for Linux is a lot more complicated than, say, software for macOS. Instead of one binary that you update for each version of the operating system, you have hundreds of binaries across many Linux distros, with different package managers, formats, init systems, and even userlands. Many people see this as an issue and reach for a universal package manager, and then they fight about which universal package manager to use. But I am here to propose that not only is the fragmentation of Linux not an issue, but universal package managers are also unneeded and inefficient.
I think all of these arguments are really bad, and I say that as someone who hates universal package managers.
“Fragmentation is not an issue”
It seems they all agree that all of the different packaging formats and managers are a problem. However, is that really so?
Well, duplication of effort is always a downside.
As a developer, by simply using a free licence, you can just sit back and let all of the distros build binaries and do all of the work for you.
The whole complaint is that this doesn’t always happen in a timely fashion, and even when it does, it requires a lot of duplicated work from each of those distributions.
“There is no need for universal package managers”
Yet, there is a universal package manager that has been around longer than even traditional package managers. BUILDING FROM SOURCE! Many people forget that all of their software is a git clone, make, and make install away from being installed.
I wish it were that simple. In practice, most projects are much harder to build than that. Many use build systems other than plain make, such as CMake, Meson with Ninja, or GNU autotools (and every project that uses autotools has a different set of intermediate files committed, so different commands are needed to build it), and you’ll need to install whatever bespoke build tools they rely on. I almost always run into arcane error messages that give me a lot of trouble even as an experienced Linux user and programmer. This is especially true if you’re on a distribution (like anything Debian-based, in other words most newbie distros) where header files are shipped in separate packages, so trying to build anything produces errors as if you had nothing installed.
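To give a concrete (and deliberately generic) sketch of what I mean, here is roughly what “just build it” looks like under three common build systems. These are not the commands for any particular project; real projects add their own flags, dependencies, and extra steps on top of this.

```sh
# GNU autotools: some repos ship a ready-made configure script,
# others only an autogen.sh, others need autoreconf run first.
./autogen.sh || autoreconf -i
./configure --prefix=/usr/local
make
sudo make install

# CMake: configure into a separate build directory, then build and install.
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build
sudo cmake --install build

# Meson + Ninja: same idea, different tools.
meson setup build --prefix=/usr/local
ninja -C build
sudo meson install -C build
```

Three build systems, three different incantations, and that’s before the first missing -dev package enters the picture.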
A story I always share when this comes up is that of my GTK patch that fixed a GObject Introspection annotation (which affects the generated bindings for other languages). I spent twelve fucking hours trying to compile GTK and failed. I gave up and submitted the patch without ever having seen a successful build (it got accepted).
Again, I am an experienced Linux user and programmer. If even I have so much trouble compiling programs from source, expecting anyone who doesn’t have my skill set to do it is crazy.
“Universal package managers are inefficient”
Flatpak, Snap, and AppImage are just not as fast as conventional package formats. Try using any modern version of Ubuntu, and just see how slow their Snaps are.
I have never noticed them being particularly slow, either to install or to run, though I can’t comment on Snap specifically as I’ve only used Flatpak and AppImage.
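If you want to check this for yourself rather than take either of us at our word, a rough comparison is easy to do. The sketch below assumes you have an app installed both as a native package and as a Flatpak; GNOME Calculator is used here purely as a hypothetical example, and any app you have in both forms will do.

```sh
# Rough, unscientific startup comparison. Run each command a few times
# so disk caches are warm, and ignore the first run of each.
time gnome-calculator --help > /dev/null                  # native package
time flatpak run org.gnome.Calculator --help > /dev/null   # Flatpak
```

Whatever difference shows up is mostly sandbox and runtime setup overhead, not the application code itself being slower.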
But, that isn’t really even the worst part. Because of the nature of universal package managers, they require much more space than traditional packages. Every single app, instead of sharing the dependencies of all other apps on the system, is bundled with all of its dependencies. This can add gigabytes of space to many apps, and slow down older HDDs.
I mean, sure, reducing space requirements is noble, and universal package managers probably do take up a little more space (I haven’t analyzed it myself). But it’s far from a chief concern in a day when even low-end drives have hundreds of gigabytes of capacity. And as for “instead of sharing the dependencies of all other apps on the system”: blaming static linking is a serious mistake. The space impact of static linking is not a large cost, and it is easily made up for by the advantages in simplicity and reliability. I would blame dynamic linking for a lot of the headaches we have with packaging and compilation. Dynamic linking introduces the need for complex dependency resolution algorithms, ties each executable to a huge amount of environment that it has to carry around in order to work, breaks the portability of programs, and crowds your package manager’s output with obscure libraries you’ve never heard of and shouldn’t have to care about.
http://harmful.cat-v.org/software/dynamic-linking/
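To make the static-versus-dynamic distinction concrete, here is a minimal sketch (hello.c is a throwaway stand-in program, not anything from the discussion above) showing what each kind of binary actually depends on at run time:

```sh
# Throwaway test program used only for illustration.
printf 'int main(void) { return 0; }\n' > hello.c

# Default build: dynamically linked against the system C library.
cc -o hello-dyn hello.c
ldd hello-dyn            # lists libc, the dynamic loader, etc.

# Static build: needs a static libc installed (the package name varies by distro).
cc -static -o hello-static hello.c
ldd hello-static         # prints "not a dynamic executable"

file hello-dyn hello-static   # "dynamically linked" vs. "statically linked"
```

The dynamically linked binary only works on a system that provides every library ldd lists, at a compatible version; the static one carries everything it needs with it. That trade-off is what the quoted complaint about bundled dependencies is really about.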