It’s definitely an interesting hypothetical. Some homelabs that I’ve seen run crazy enterprise gear and are certainly capable of running thousands of very small containers, while others are running repurposed consumer equipment or SBCs like Raspberry Pis with less computing power and RAM.
Of course, in a self-hosted or homelab environment, there would be little utility to running that many network or web services. It would be a neat experiment, though. Seems like the kind of thing that Linus Tech Tips would attempt.
Doesn’t need to be a “traditional” container. Modulo noisy-neighbour issues, wasm sandboxing could potentially offer an order of magnitude better density (depending on what you’re running; this might be more suited to specific tasks than providing a substrate for a general-purpose compute service).
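Rough sketch of what I mean, using wazero from Go (the `module.wasm` file and its exported `add` function are made up, and a module with WASI imports would need extra setup): every instance lives inside one OS process, so the per-instance fixed cost is its linear memory plus some bookkeeping rather than a full container.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
)

func main() {
	ctx := context.Background()

	// One runtime (one OS process) hosts every sandboxed instance.
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// Hypothetical module: no WASI imports, exports add(i32, i32) -> i32.
	wasmBytes, err := os.ReadFile("module.wasm")
	if err != nil {
		log.Fatal(err)
	}

	// Compile once, instantiate many times.
	compiled, err := r.CompileModule(ctx, wasmBytes)
	if err != nil {
		log.Fatal(err)
	}

	const n = 10_000
	for i := 0; i < n; i++ {
		mod, err := r.InstantiateModule(ctx, compiled,
			wazero.NewModuleConfig().WithName(fmt.Sprintf("inst-%d", i)))
		if err != nil {
			log.Fatal(err)
		}
		// Each instance gets its own linear memory, isolated from the others.
		if _, err := mod.ExportedFunction("add").Call(ctx, uint64(i), 1); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Printf("ran %d sandboxed instances in a single process\n", n)
}
```

Whether that actually buys you an order of magnitude over scratch containers depends on the workload, but the fixed overhead per instance is much smaller than per container.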
I mean, if you have around 17 million containers running services, maybe.
@BaldProphet
What’s the smallest container around? How much RAM would that take?
edit: FROM scratch lets you run bare binaries on Docker.
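Something like this, roughly (the binary name is made up; it has to be statically linked, which for Go means building with CGO disabled):

```dockerfile
# Hypothetical static binary, built with something like:
#   CGO_ENABLED=0 go build -ldflags="-s -w" -o hello .
FROM scratch
COPY hello /hello
ENTRYPOINT ["/hello"]
```

The image is basically just the binary (a trivial Go program lands around 1–2 MB), and per-container RAM is mostly the process’s RSS plus a little overhead from the container runtime.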
Would be very interesting to see how far that could get. What sort of payload/task would be interesting for all those containers?