Hi, mostly I use RHEL-based distros like CentOS/Rocky/Oracle for the solutions I develop, but it seems it’s time to leave…
What good server/minimal distro do you use?
I’ll start testing Debian stable.
I have been using Debian for about 20 years now. Server and desktop. But I recently migrated all my server stuff to FreeBSD and I don’t think I will move back. Jails are great and provide me a convenient way to isolate my apps. On the desktop side I will stay with Debian.
My first reaction to learning about jails was “why the hell doesn’t linux have these”.
Linux has this: https://wiki.archlinux.org/title/Systemd-nspawn
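For anyone curious, a rough sketch of spinning one up (the container path and suite are just examples; assumes debootstrap and the systemd-container tools are installed):

    # create a minimal Debian tree and enter it as a lightweight container
    sudo debootstrap stable /var/lib/machines/debian-test http://deb.debian.org/debian
    sudo systemd-nspawn -D /var/lib/machines/debian-test   # chroot-like interactive shell
    sudo systemd-nspawn -bD /var/lib/machines/debian-test  # or boot it like a tiny VM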
YES
YES
YES
You can check out LXD too: https://wiki.archlinux.org/title/LXD And Podman: https://podman.io/ Podman is Docker without dockerd, and it is to containers roughly what BastilleBSD is to jails. https://bastillebsd.org/
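A rough sketch of the Podman workflow, for comparison (image name and port are just examples):

    # rootless, daemonless container - roughly the same "one service, one box" feel as a jail
    podman run -d --name web -p 8080:80 docker.io/library/nginx:stable
    podman ps
    podman logs web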
You can’t go wrong with Debian
All my servers run Debian and it’s going swimmingly. My daily driver runs Bookworm with huge success.
Bookworm is such a tremendously good release. I’ve been on Debian since Potato, and IMHO we are seeing the absolute best release they’ve ever put out.
I would hope you could say that with every release.
“Blue is the absolute color”… why?
I’ve used Debian on and off since the late 90s; what stands out about Bookworm? They’ve been mostly the same to me, not that that’s a bad thing.
I’m going to throw my support behind this one as well. I’m circling back to Debian after a long stint on Fedora on my primary machine. I’ve been running Debian 12 on my desktop for several weeks now and it’s been pretty great.
It is one version behind Fedora in GNOME releases, so I installed the latest GNOME from the experimental repos and that worked pretty well. I don’t know if I would recommend that for anyone else, but it worked for me.
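For anyone wanting to do the same, it boils down to roughly this (assuming an experimental entry already exists in sources.list; package names as in current Debian):

    # pull GNOME from experimental explicitly; everything else stays on stable
    sudo apt update
    sudo apt install -t experimental gnome-shell gnome-session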
I have a few personal servers still running CentOS 7, but I will be migrating them to Debian slowly over the next few months. I suspect it will go fine. I trust the Debian organization to maintain FOSS ideals over the next 5 to 10 years, so it seems like a good default for me.
I have read about Vanilla OS. It is Debian based with some neat features stacked on top that might be fun for a desktop OS. I can see myself switching to that on the desktop if they deliver on all their promises.
Lifelong Debian (and Debian derivatives) user (23 years and counting). I have pretty much settled down into the following (this has been true for years):
- Debian for servers.
- Mint for workstations that you want to just work, where you don’t want to spend time troubleshooting / tinkering. Mint is Linux your grandma can use (my Boomer real estate broker father has been running Mint laptops for the last 5 years).
- Ubuntu for jr. engineers who want to learn Linux.
- Qubes (with Debian VMs) for workstations that must be secure (I’ve been working recently with several organizations that are either prime targets of the CCP or have DFARS / NIST compliance requirements).
I can throw in a vote for Debian stable as well. I’ve recently installed Debian 12 and I’ve been blown away by how great it’s been out of the box compared to my recent Fedora 38 experience.
What kind of hardware are you running it on? I’ve started using Debian for servers, but I’m still using Fedora for laptops, currently. I am always curious about different options.
This is my daily driver tower.
- i9 10850k
- ASUS TUF Gaming Z590-Plus
- NVIDIA GeForce RTX 2070 SUPER
I don’t use Wi-Fi; however, it did work out of the box. The only thing that required additional setup was the Nvidia card, but the driver was available in the repos.
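For reference, on Debian 12 that amounts to roughly the following (assuming the contrib, non-free, and non-free-firmware components are enabled in sources.list):

    # proprietary driver metapackage from the Debian repos, plus misc firmware
    sudo apt update
    sudo apt install nvidia-driver firmware-misc-nonfree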
If you do end up testing it out on a laptop let me know how it goes. I have a Windows laptop lying around here somewhere that could use some love.
Will do! It looks like your stuff is pretty recent. 13th gen seems to be a bit different for Intel because of the processor layout; I think it requires a newer kernel version, which is what I apparently have.
Debian.
Devuan over Debian for stability and speed.
My vote is Archlinux. Debian is sometimes a little too “optimistic” when backporting security fixes, and upgrading from oldstable to stable always comes with manual intervention.
Release-based distros tend to be deployed and left to fend for themselves for years - when it is finally time to upgrade, it is often a large manual migration process depending on the deployed software. A rolling release does not have those issues; you just keep upgrading continuously.
Archlinux performs excellently as a lightweight server distro. Kernel updates do not affect VM hardware the same way they do your laptop, so no issues there. Same for drivers. It just works.
Bonus: it is extremely easy to build and maintain your own packages, so administration of many instances with customized software is very convenient.
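For anyone who has not tried it, a bare-bones PKGBUILD sketch for a hypothetical in-house tool looks something like this (names, sources, and checksums are placeholders):

    # PKGBUILD - minimal sketch for a hypothetical internal tool
    pkgname=mytool
    pkgver=1.0
    pkgrel=1
    pkgdesc="Hypothetical in-house tool"
    arch=('x86_64')
    license=('MIT')
    source=("mytool-$pkgver.tar.gz")
    sha256sums=('SKIP')

    package() {
      # copy the tool out of the extracted source into the package root
      install -Dm755 "$srcdir/mytool" "$pkgdir/usr/bin/mytool"
    }

Build and install it with makepkg -si, or drop the resulting package into a small custom repo that all instances pull from.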
I don’t think Arch (or any rolling distro for that matter) is the best solution for a server deployment. If you update rarely, you’re bound to have to do manual interventions to fix the update. If you update too often, you might hit some distro breaking bug and you’re rebooting very often as well. Those two options are not great on something requiring stability.
Once a year there is a manual intervention. The last one was the repo merge, and that did not even break anything. Before that… hmmm… I don’t even remember.
On desktop with Nvidia and a lot of other AUR stuff it is more work, but the servers run smooth as butter.
RHEL is designed to be the terminator: a bit outdated, but never stopping and never giving up until it’s completely destroyed.
Arch is a house that’s being built by a drunk tradie: everything is probably going fine, but you might end up with a front door that opens up to a solid brick wall.
The main benefit of arch is that it has a huge repo of cutting edge packages. That is pretty much completely useless for both development and infrastructure.
Devs don’t use cutting edge packages because that can introduce a whole lot of work for no benefits. So for example instead of installing node (cutting edge on arch), they use node-14-lts, just like their infra, until it stops getting support or a feature they need comes out in a newer lts version. And if your app is running on lts packages, you most certainly don’t need cutting edge system packages and all of the issues that come with them.
Debian is sometimes a little too “optimistic” when backporting security fixes
You’re not going to be hacked because of a system package. It’s going to be a bad library, or your own bad code. Either way, it’s got nothing to do with pacman.
Release-based distros tend to be deployed and left to fend for themselves for years - when it is finally time to upgrade, it is often a large manual migration process depending on the deployed software. A rolling release does not have those issues; you just keep upgrading continuously.
We’re not back in the early 2000s; upgrading the OS is trivial when you’re using tools like Terraform, Ansible, and Docker.
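As a rough sketch (the inventory group name is made up), keeping a Debian/Ubuntu fleet patched can be a couple of ad-hoc Ansible calls:

    # dist-upgrade every host in the "web" group, then reboot them
    ansible web -b -m ansible.builtin.apt -a "update_cache=true upgrade=dist"
    ansible web -b -m ansible.builtin.reboot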
Bonus: it is extremely easy to build and maintain your own packages, so administration of many instances with customized software is very convenient.
Sure, you can write a package for pacman and have it available on Arch. Or you can write a Guix package and have it available on any Linux distro. Or you can write a Nix package and then run it on macOS as well. Windows is covered by both of these thanks to WSL.
I’ve recently had to write a package for both Arch and Guix; the Guix one was a lot easier and the whole process was a lot smoother. Also you get nice features like transformations, allowing you to modify an existing package instead of having to rewrite it.
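A couple of the transformation options, as a sketch (package and file names are placeholders):

    # rebuild an existing package from your own tarball instead of writing a new definition
    guix install mytool --with-source=mytool=./mytool-1.2.tar.gz
    # or swap one of its dependencies without touching the package definition
    guix install mytool --with-input=openssl=libressl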
Archlinux performs excellently as a lightweight server distro. Kernel updates do not affect VM hardware the same way they do your laptop, so no issues there. Same for drivers. It just works.
I haven’t used it as a server distro, but it was my main desktop distro for the last ~4 years. It crashed every month or two, and failed to boot at least 3 times even with regular -Syu runs. Before that I ran Mint for 2+ years. It never crashed, it never failed to boot. Other machines I wouldn’t update for months; Mint had no issues with that and updated perfectly fine. Arch would often crap itself completely and fail to boot; I’d do a btrfs rollback and try again in a week or two. Sometimes that would be enough, other times I had to wait a bit more for shit to settle.
Arch has possible minor benefits, and a lot of possible downsides. It just doesn’t make sense to use it on a server, when you can take a rock solid foundation like Debian, and then build on top of it with nix/guix.
We use Ansible as well; it keeps all servers happily upgraded and all packages in working order - even the weirdest custom software instances. Node.js is available as LTS packages in Arch and it, again, just works.
I have zero issues with upgrades on desktop and server, except once last year when my old Core2Duo notebook I use in the kitchen did not suspend correctly for a whole week until the kernel bug was fixed. (I ran linux-lts for that week; it was… smooth sailing.)
During that time we had 3 failed migrations of old PHP software to the new Ubuntu LTS and were fighting almighty RHEL because it simply did not provide the packages the customer required - we are now running an Arch container on the RHEL box…
I know this discussion is a little bit like religion, and obviously luck and good circumstances play a role. We both speak from experience and OP can make their own decision.
You are basically recommending burning money.
Not because of Arch itself and its quality, but because you need to constantly monitor the mailing list for issues and you need to plan a lot more downtime for reboots. This is not gonna happen in businesses.
If you need reliable uptime, you need redundant servers, and at that point you can just apply updates and reboot the servers concurrently.
Businesses rely on stable servers and applications. Stable in the sense of API/ABI stable. You want an application to behave exactly the same on day one and on the last day before the server OS goes EOL.
Arch is pure chaos; it could completely change how things work and break commercial third-party apps on that server on potentially any day. And you would not necessarily notice the error until it’s too late and your data is corrupted.
You don’t throw money at your server infrastructure to get redundant servers just to finally be able to run Arch somewhat stably. And why shouldn’t a business use that redundancy with an LTS distro to get even more stability and safety of operations?
Regardless of the distro, unless you use kernel live patching (https://wiki.archlinux.org/title/Kernel_live_patching) you should boot a new kernel when it is released by your distro with a security warning. Running unpatched old kernels just for 100% uptime is not safe.
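A rough heuristic (assuming the standard Arch module path): if the module tree for the running kernel is gone, the kernel on disk has been replaced and a reboot is due.

    # pacman removes the old /usr/lib/modules/<version> when it upgrades the kernel package
    [ -d "/usr/lib/modules/$(uname -r)" ] || echo "kernel updated on disk - schedule a reboot"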
Oh, and I never had issues with Arch changing spontaneously - what event are you speaking of?
Regarding the kernel upgrades: using the linux-lts package / kernel gets you a pretty reliable setup.
You already figured it out. It’s Debian stable.
For servers, Debian is great :) I use Ubuntu 20.04 LTS personally.
Slackware. It’s stable as a mofo.
I used to be Slackware user. Then I sold my soul to RedHat, then to Debian…
I just installed Slackware after reading your message to see what’s new; here are my findings:
- There is still no automated install. I had to configure a lot of things manually using a terminal-based fdisk and setup.
- The default package manager, pkgtool, does not have a built-in way to install packages from the web (something like yum, apt, or up2date). It only installs from your own HDD.
- The other tool for managing packages, slackpkg, was not installed on my system by default (see the sketch at the end of this post).
- The default configuration for X and KDE has problems on my system: I can see the mouse move, then nothing.
I can understand why somebody would like to play around with this kind of system for fun/entertainment/puzzle solving in their free time. On the other hand, if you plan to run some kind of microservices architecture on this, then I wish you the best of luck finding a new job once you are fired.
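To be fair, once slackpkg is installed and a mirror is uncommented in /etc/slackpkg/mirrors, network installs are roughly:

    # refresh package lists, install something, upgrade everything
    slackpkg update
    slackpkg install htop
    slackpkg upgrade-all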
Sounds like it’s not the tool for you. That’s fine. It could be for countless other people out there.
Damn slackers
We are everywhere!
Can’t really go wrong with Debian or Ubuntu server LTS
You can definitely go wrong with an Ubuntu server
How? I’ve run several for years with no issue. They’re as stable as a rock
snaps are pretty insecure.
Snaps are pretty terrible IMO, so I usually end up bootstrapping a custom Ubuntu image without snap for this reason (and others) for my cloud images. Definitely not general purpose though.
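The core of it is roughly this, for anyone wanting to do something similar on a stock Ubuntu image:

    # remove snapd and keep apt from pulling it back in
    sudo apt purge -y snapd
    sudo apt-mark hold snapd
    sudo rm -rf /snap /var/snap /var/lib/snapd ~/snap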
Why not just use Mint, which strips snap out for you?
Mint doesn’t build cloud images as far as I’m aware.
*citation needed
Go to the snap site and try to find a security section that describes how snap packages are signed. You won’t be able to find it because it doesn’t exist, and they don’t highlight their own security vulnerabilities.
What I can cite is how this should work - for example, how apt signs all packages by default.
Note how in the above doc there’s a message:
WARNING: The following packages cannot be authenticated! ... Install these packages without verification [y/N]?
That doesn’t exist in snap because snap does not authenticate downloads. It’ll just happily install something maliciously modified.
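For comparison, this is roughly what the apt side looks like when you add a third-party repo (keyring path and URL are placeholders): every repo line is pinned to a signing key, and anything unsigned triggers the warning quoted above.

    # /etc/apt/sources.list.d/example.list
    deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/debian stable main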
For all my non-compliant, non-supported hosts I started using Fedora CoreOS quite successfully.
If you package your applications as containers, you should have a very easy time with it. It’s based off ostree, which means a couple of things:
- immutable (so not easy to break, I guess?)
- atomic upgrades, which means you upgrade in a single step
- atomic and full rollbacks, which means if an upgrade breaks your host, you can roll back to the exact previous version booted, simply by choosing it from GRUB (rough sketch right after this list)
- still based on rpm, so you will still have a grasp of it, even though many things are completely different
- other benefits I forgot, I’m sure :)
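Day to day it boils down to a handful of rpm-ostree commands; a rough sketch:

    rpm-ostree status     # show the booted deployment and the previous/staged ones
    rpm-ostree upgrade    # stage the next OS version atomically; nothing changes until reboot
    systemctl reboot      # boot into the new deployment
    rpm-ostree rollback   # mark the previous deployment as the default for the next boot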
All with the added benefit that once you go towards containers you can change your distro with minimal effort, so there’s that.
I don’t understand what’s happening at Red Hat. First they pull the codecs out of Fedora, which is supposed to be a community distro - so why are company lawyers involved? Now they’re basically closing off their source code. I mean, technically it’s not violating the GPL because you only have to make your source available to your customers.
Not really. Any customer can share GPL code after they get it. Red Hat can’t change that if they use the GPL. The issue, from my understanding, is that Red Hat can have some non-GPL code to build the final product. So sharing the GPL code itself would not be enough to build a 1:1 binary-compatible distribution.
At least in theory, because we don’t know all the details yet. Imagine a situation like the Chrome browser vs Chromium.
Codecs were never legal to include, community distro or not. The Red Hat lawyers told Fedora that, and Fedora removed them.
Looks like this sort of thing: https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish
Void Linux. It just works.
It indeed just works despite being a rolling release, but still wouldn’t recommend it as a server. Desktop usage is what it’s meant for.
Have to also add to the voices recommending Debian stable. I’ve used it now for ten straight years after I stopped distro-hopping for my servers and desktop, and I cannot imagine using another distro. It’s incredibly stable, but the best part of Debian is the absolutely expansive repositories that even the Arch User Repository can’t beat. Very rarely do I ever need to use Flatpak (ugh) for packages, or look to add in new external repositories.
@americanwaste @bzImage
Honestly I’ve had the inverse experience, where the package I need is only in the AUR and not in the Debian repos, but at least we can agree that Flatpak and Snap are terrible.
expansive repositories
That would be new for me. AFAIK Debian doesn’t have that many packages (compared to the AUR or even nixpkgs; see https://repology.org/). Regarding Flatpak: what packages do you need on a server with Flatpak? Desktop makes sense to me, but I haven’t yet had any use case / package for server-related software in Flatpak.
I switched from Debian to NixOS for servers three years ago, as I think it’s easier to maintain long-term (after being on Debian servers for years). With Debian, a fresh install once a release hits EOL is often a bit more hassle and requires longer downtime in my experience - quite apart from the lack of reproducibility and declarativeness, and from the sheer amount of software packaged and configured in nixpkgs.
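For context, routine maintenance on such a box is roughly this (the whole system is described declaratively in /etc/nixos/configuration.nix):

    # rebuild and activate the system described in configuration.nix, pulling in channel updates
    sudo nixos-rebuild switch --upgrade
    # if the new generation misbehaves, activate the previous one again
    sudo nixos-rebuild switch --rollback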
For my public-facing server, I use Debian Testing, since I haven’t had any major issues with its stability. Auto-upgrades usually work, although there were a few times I had to manually intervene on the latest codename change from Bookworm to Trixie. I usually don’t even log in except every few months.
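One common way to get unattended upgrades on Debian, for anyone curious (not necessarily the only route):

    # install and enable automatic upgrades; the prompt writes the apt config for you
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades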
At home, where it will only affect me (and possibly my family having to deal with me) if the whole OS crashes and has to be rebuilt from backups, I use Arch.