I’m looking for some feedback on my Plex system architecture.

All my media is stored on a Synology DS1621+, six 4 TB drives in RAID 6 with one acting as a hot spare. All four network ports are bonded into a 4 Gbps link to a Ubiquiti USW-48-POE.

Previously, I ran Plex in a Docker container on the NAS. This setup was stable; however, the NAS only has 4 GB of memory shared between Plex, several other Docker services, and regular DSM overhead. Plus, the processor is not very powerful (AMD Ryzen V1500B, ~5400 PassMark).

A few months ago I repurposed some old desktop PC parts to build a home lab Proxmox server (Core i7-6700K [~8900 PassMark], 32 GB memory, GTX 970, an old 2.5” SATA SSD for guest OS disks, 1 GbE networking on the motherboard). I’m running Plex on an Ubuntu VM, with the GPU passed through directly to the guest OS; Plex is not containerized in Ubuntu. The VM has 8 CPU cores and 8 GiB of memory (Proxmox allocates memory in GiB rather than GB). My Plex media is accessed via a persistent NFS mount in Ubuntu (it had been SMB until a DSM update broke something and the VM could no longer read the directory contents).

The main purpose of the change from NAS to VM was to use the extra CPU/GPU horsepower and memory I had lying around, but I worry that the added layers of complexity (hypervisor/VM, PCIe passthrough, NFS mounts) introduce more opportunities for performance issues. I have noticed more frequent hiccups/buffering/transcoding since the change, but I’m not sure whether those issues lie with my setup, with client devices, or with the files themselves (e.g. a container format the client can’t play natively, forcing a transcode).

Any critique or recommendations on system architecture? Should I get a dedicated NIC to pass through to my VM? Dedicated NVMe drive passed through as a guest OS disk? Ditch Proxmox altogether and go back to Synology Docker container?

  • liara@lemm.ee · 10 months ago

    I’m kind of over the whole idea of keeping “pets” around to serve my various self-hosting needs. Why build a hypervisor and shard it into multiple operating systems that all need to be maintained, when you can just orchestrate all your hosting needs with a container orchestrator like k0s/k3s on the host? Even GPU passthrough can be done.

    I’m a bit biased because I’m also a CKA (Certified Kubernetes Administrator), but I was a die-hard “bare metal or bust” kind of person with my self-hosted stuff until I discovered Kubernetes. K8s is resource-hungry on its own, but a distro like k0s really pares down the minimum requirements and basically becomes a more featureful version of Docker if you just run it as a single node.

    Eventually I came to understand that when your entire home stack is represented by a few hundred lines of YAML and a couple of directories of portable data, you can stop coddling the Linux install and just use the applications.

    • keen1320@lemmy.world (OP) · 10 months ago

      I apologize for my ignorance when it comes to Kubernetes - I sort of wrote it off as complete overkill for a home lab when my very basic understanding was that it was essentially a load balancer. After some light research, I’m beginning to understand that it could be a better solution than a full-blown hypervisor.

      If I understand your comment correctly, you’re suggesting to simply run a lightweight distro and install k0s or k3s to run containers? What would be an ideal bare metal OS for this? What would be pros/cons to k0s vs k3s in a home lab environment, or is that simply a matter of personal preference? What would be the best way to connect to my media - SMB, NFS, something else? Or are the differences here irrelevant? Any concerns (permissions, IO latency) when passing an NFS mount from host into a container, or is there an even better way to do something like that entirely within the container?

      • liara@lemm.ee · 10 months ago (edited)

        A complete Kubernetes cluster for a homelab probably would be overkill (unless you really want a Kubernetes playground, which some folks do). However, yes, my recommendation these days would be k0s directly. I used k3s up until recently but gave k0s a shot and found it’s a bit lighter on resources, more configurable (for instance, you can run CRI-O instead of containerd, which isn’t an option with k3s), and has some extra features, like letting you put Helm charts and their values directly in the k0s config.
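
        As a rough sketch of what that looks like (the ingress-nginx chart and version here are just examples, not a recommendation):

        ```yaml
        # k0s.yaml (excerpt) -- Helm charts declared directly in the cluster
        # config; k0s installs them at startup. Chart, repo, and values below
        # are illustrative placeholders.
        apiVersion: k0s.k0sproject.io/v1beta1
        kind: ClusterConfig
        metadata:
          name: k0s
        spec:
          extensions:
            helm:
              repositories:
                - name: ingress-nginx
                  url: https://kubernetes.github.io/ingress-nginx
              charts:
                - name: ingress-nginx
                  chartname: ingress-nginx/ingress-nginx
                  version: "4.10.0"
                  namespace: ingress-nginx
                  values: |
                    controller:
                      service:
                        type: NodePort
        ```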

        k0s vs. k3s mostly comes down to personal preference, but for me:

        • I disabled a lot of k3s features out of the box (swapped flannel for Calico, used ingress-nginx instead of Traefik). k0s feels a little less opinionated – it doesn’t include quite as many batteries at initialization, but that doesn’t bother me because I have my own preferences for how to handle certain parts of my stack
        • both can use SQLite as the data backend (and both do by default in single-node mode), which uses far fewer resources than etcd as the data store
        • I find k0s uses a couple hundred MB less RAM for the control-plane components (about 700 MB vs. 1 GB for k3s)
        • less constant CPU usage from the API server
        • both have good documentation for their specific features, and of course Kubernetes itself – the “language” used to define the services and pods – is extremely well documented

        As for the distro to run it on, I use MicroOS myself (an immutable OS; I have it set to automatically update and reboot once a week), but Debian is my second choice and my usual preference for server distros. The beauty of this setup is that the container host only needs the bare minimum to run the containers. There’s less that can break, because the containers are all managed by others upstream, so the main breakage concerns basically become: did the server boot, and did k0s start?

        NFS is fine and actually natively supported as a kubernetes volume type: https://kubernetes.io/docs/concepts/storage/volumes/

        One option is to mount it on the host first and then use a hostPath to mount it into the container; another is to mount the NFS path directly in the pod. As for permissions, you may need to do some mapping, but Kubernetes also has security contexts that let you alter the UID the pod runs as. If you need the user to be privileged and root, you can do that; if you need UID 5124, you can do that too.
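
        To make that concrete, here’s a minimal sketch of the direct-NFS approach (the server address, export path, and UID are placeholders – match them to your NAS):

        ```yaml
        # Minimal sketch: a pod mounting an NFS export directly and running
        # as a non-root UID. Server, path, and IDs are placeholders.
        apiVersion: v1
        kind: Pod
        metadata:
          name: media-test
        spec:
          securityContext:
            runAsUser: 5124    # run the container process as this UID
            runAsGroup: 5124
          containers:
            - name: shell
              image: busybox
              command: ["sleep", "3600"]   # keep the pod alive to poke around
              volumeMounts:
                - name: media
                  mountPath: /media
          volumes:
            - name: media
              nfs:
                server: 192.168.1.10   # your NAS's IP (placeholder)
                path: /volume1/media   # the NFS export path (placeholder)
                readOnly: true
        ```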

        If your goal right now is a Plex server and not much else to start with, then this makes things very easy:

        • spin up k0s
        • add a Plex pod/manifest
        • add a Service of type NodePort and expose the Plex service on a static node port of 32400 (we are lucky that Plex’s port falls into the NodePort service range by default; see the sketch after this list)
        • the GPU passthrough I admit will take some work, but it should be doable
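
        A rough sketch of those two manifests (the image is the official plexinc/pms-docker; the NFS server, paths, and tag are placeholders):

        ```yaml
        # Rough sketch: Plex Deployment plus a NodePort Service pinned to
        # 32400. NFS server, paths, and the image tag are placeholders.
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: plex
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: plex
          template:
            metadata:
              labels:
                app: plex
            spec:
              containers:
                - name: plex
                  image: plexinc/pms-docker:latest
                  ports:
                    - containerPort: 32400
                  volumeMounts:
                    - name: media
                      mountPath: /data
                    - name: config
                      mountPath: /config
              volumes:
                - name: media
                  nfs:
                    server: 192.168.1.10   # NAS IP (placeholder)
                    path: /volume1/media
                - name: config
                  hostPath:                # fine on a single node
                    path: /srv/plex/config
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: plex
        spec:
          type: NodePort
          selector:
            app: plex
          ports:
            - port: 32400
              targetPort: 32400
              nodePort: 32400   # static; inside the default 30000-32767 range
        ```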

        You can add ingress-nginx, cert-manager, MetalLB, etc. later down the line if you get curious and want to expand a bit (Sonarr, Radarr, AdGuard Home, etc.)

        You could also just go full stupid with KubeVirt, but it’s not a project I’ve personally explored. IIRC it basically allows provisioning more persistent VMs with k8s rather than containers.

        • keen1320@lemmy.world (OP) · 10 months ago

          Again, pardon my ignorance when it comes to Kubernetes. Why would I use something like k0s instead of just regular old Docker? I suspect PCIe passthrough will have similar challenges on both k0s and Docker, whereas on Proxmox it’s been relatively painless.

          This might be better suited for a different community, in which case I’ll make a post where appropriate. I’m not familiar with some of the Kubernetes terminology - batteries, pod/manifest (is this similar to stacks/docker compose?), NodePort?

          • liara@lemm.ee · 10 months ago (edited)

            I mean, you could just use Docker, and if all you want is a Plex container, it may be the way to go. Kubernetes is definitely a lot to learn if you’re hesitant to get started with it in the first place. I would just say that a single-binary distribution like k0s basically becomes Docker on steroids when used in a single-node environment. I’ve become so familiar with k8s that going back to Docker feels like a massive downgrade for anything but a simple, straightforward task (which a single Plex container, admittedly, is).

            Just do yourself a favour if you go that route: at least use docker-compose to template your container. Searching through your bash history to find the command you used to start the container is a recipe for frustration waiting to happen.
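
            Something as small as this does the job (host paths and timezone are placeholders):

            ```yaml
            # docker-compose.yml -- minimal Plex sketch using the official
            # plexinc/pms-docker image; host paths and TZ are placeholders.
            services:
              plex:
                image: plexinc/pms-docker:latest
                network_mode: host          # simplest way to expose Plex's ports
                environment:
                  - TZ=America/Chicago      # placeholder timezone
                volumes:
                  - /srv/plex/config:/config
                  - /mnt/media:/data:ro
                restart: unless-stopped
            ```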

            One major feature that k8s has which Docker doesn’t (there are lots, to be honest) is Helm charts. These are basically install templates; if you don’t like the defaults, you can provide your own values (assuming the chart author has written a good chart), and Helm will template your values into the default chart and spin up a bespoke version for you.

            For instance, a theoretical helm chart whose purpose is to install qbittorrent would likely provide the following:

            • the manifest to run a version of qbittorrent
            • a ClusterIP to expose the qBittorrent web port internally
            • an “ingress” object to connect your nginx frontend to the qbittorrent web port so that you can go to mydomain.com/qbittorrent and qbittorrent appears
            • a volume mount to store your data in

            However, say this ingress doesn’t use SSL by default, but you want HTTPS when you enter the password on the web interface, while the rest of the chart’s defaults meet your needs – then a well-written chart would let you provide values that template the SSL setup for the ingress object and have cert-manager go and provision certificates for your hostname with Let’s Encrypt.

            Helm charts are basically a way to provide a sane set of defaults that can be extended and customized to personal needs.
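
            For the hypothetical qbittorrent chart above, the overrides might look something like this (the key names depend entirely on how the chart author wrote it – this is illustrative, not a real chart):

            ```yaml
            # values.yaml -- hypothetical overrides for the imaginary
            # qbittorrent chart; key names depend on the chart author.
            ingress:
              enabled: true
              annotations:
                cert-manager.io/cluster-issuer: letsencrypt   # have cert-manager fetch a cert
              hosts:
                - host: mydomain.com
                  paths:
                    - path: /qbittorrent
              tls:
                - secretName: qbittorrent-tls
                  hosts:
                    - mydomain.com
            ```

            You’d then hand that file to Helm with something like `helm install qbittorrent <repo>/qbittorrent -f values.yaml`, and Helm merges it into the chart’s defaults.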

            k8s (or the lightweight cousins) may not be ideal for you and, as I said, I’m biased because I’m a certified k8s admin, so tinkering with k8s resembles something like fun for me. Your mileage may vary :)

            Terms:

            • batteries: this isn’t a Kubernetes term, just a figure of speech. You got a new toy and it came with batteries – you didn’t have to supply them yourself. The batteries were included (i.e. the installation was opinionated and came with a pre-existing notion of how you should use the application). I wrote that last comment from my phone, so I may have misused the term in a rush to get a response written
            • pod: a collection of containers running in tandem – for instance, if you had nginx and plex running in the same pod, then plex can find nginx ports as if they were sharing the same machine (127.0.0.1 is the same for both containers). If you had nginx and plex running in different pods, then you would need to use a service to allow them to communicate with each other (which they could easily do with the cluster’s dns service)
            • manifest: a yaml file containing the spec of your containers (name, container port, image to use, volumes to mount). This would basically equate to a docker compose file, but manifests can also define services, namespaces, volumes, etc. In that regard a manifest is a yaml file that defines an object in kubernetes
            • nodeport: a type of service. This one directs traffic to a given pod and can open that service to external clients. By default, NodePorts are assigned in the 30000-32767 range and bind to the host’s network interface, so the service becomes reachable externally on that port. Another service type is ClusterIP, which only assigns an IP reachable internally (i.e. by other pods/services in your cluster – for instance, if you had a MySQL service you wanted to expose to a web/application pod but not to the world, ClusterIP lets you do that). The final service type is LoadBalancer, which is a bit more complicated (TL;DR: these are frequently integrated with cloud providers to automatically spin up actual load balancer objects, for instance at AWS, but can also be used to bind services to privileged ports on your external IP by leveraging something like MetalLB)
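
            To illustrate the MySQL example, a ClusterIP service is just this (names are placeholders):

            ```yaml
            # ClusterIP service for the MySQL example above: reachable inside
            # the cluster (e.g. as "mysql" via cluster DNS) but never exposed
            # externally. All names here are placeholders.
            apiVersion: v1
            kind: Service
            metadata:
              name: mysql
            spec:
              type: ClusterIP    # the default; shown explicitly for contrast with NodePort
              selector:
                app: mysql       # matches pods labeled app=mysql
              ports:
                - port: 3306
                  targetPort: 3306
            ```
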
      • keyez@lemmy.world · 10 months ago

        Kubernetes in any form is overkill for a homelab. Especially since you wouldn’t want to stand up k8s on the Synology itself, you’d be running it on a separate node and still messing about with NFS or specific mounts, which just adds complexity.

        • keen1320@lemmy.world (OP) · 10 months ago

          What are your thoughts on TrueNAS Core or Unraid instead of Synology? I could still run Plex on the same hardware that handles the storage while maintaining the freedom and flexibility that my current home lab server provides. There appears to be plenty of decommissioned enterprise-grade hardware being sold on FB all the time.

          • keyez@lemmy.world · 10 months ago

            I run Unraid on my homelab with 3 VMs and 12 Docker containers, and the fewer abstractions, the better. I have no complaints with Unraid, but I went with it over TrueNAS since I have mismatched disk sizes.

            I also run TrueNAS as a lukewarm backup server for Unraid plus one extra mount, and have no complaints there either, but jails (on Core) and the Kubernetes backend for Docker apps (on SCALE) seemed like a poor fit for what I wanted – though I’m sure others love that setup and would prefer it.

  • keyez@lemmy.world · 10 months ago

    There’s some troubleshooting you can do: first, make sure the GPU is actually being utilized by Plex; then check metrics and stability on the NFS mounts to see whether there are spikes or tuning is needed. That setup sounds like it should work, and a direct connection between the two hosts would be even better.

    That said, I would recommend going back to basics and running the Plex Docker container directly on the Synology.