Someone recommended it for keeping my containers up to date automatically. I checked out the repo and it seems too good to be true. It just updates your containers when a new image is available and everything just works out of the box? I’m a bit scared of just leaving it alone in case it might break something. The fact that it doesn’t come with a GUI also scares me a bit.

Does anyone here use it and can recommend it? Any horror stories?

  • CactusBoyScout@alien.top · 1 year ago

    I just had a strange issue with Watchtower where it somehow failed to update itself. And it left a running but unhealthy duplicate of itself. Just restarting the old container fixed it. But I guess that’s a risk?

  • Tangbuster@alien.top · 1 year ago

    Used Watchtower on my Synology for a while and it worked well. No issues in that time.

    Now I’ve moved to a NUC and am more experienced with Docker. I understand a lot more of it, though I am by no means a professional, and I would say I wouldn’t use Watchtower anymore. I can definitely see it messing up a config, and I’d prefer not to deal with the headache of troubleshooting something without knowing an auto-update caused it. If I had the time, I might tag the apps I’m happy to auto-update, but for now I prefer the higher availability of not auto-updating.

  • Calm-Size-1110@alien.top · 1 year ago

    You can set up notifications so you know which containers were updated recently. If a container stops working, just revert to the previous image.

    You can also configure when Watchtower should run updates. I set mine to update at 8pm, so if something breaks I still have a few hours before bedtime to fix it.
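    Something along these lines in a compose file does both (the Gotify/shoutrrr notification URL is just a placeholder example):

    ```yaml
    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_SCHEDULE=0 0 20 * * *   # 6-field cron (seconds first): check daily at 8pm
          - WATCHTOWER_CLEANUP=true            # remove the old image after a successful update
          - WATCHTOWER_NOTIFICATIONS=shoutrrr
          - WATCHTOWER_NOTIFICATION_URL=gotify://gotify.example.com/AxxxxToken   # placeholder notification target
        restart: unless-stopped
    ```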

  • Simon-RedditAccount@alien.top · 1 year ago

    Yes, there are risks:

    • First, updates can break things. Already explained here.
    • Second, exposing the Docker socket to Watchtower means you have to trust it completely. Any vulnerability in WT can lead to a whole-system compromise.

    Personally, I use DIUN. It just sends me notifications about available updates, and I update things manually later. My system is pretty well isolated from the outside world, so there’s no need to hurry.
    On a VPS, I would prefer a different approach though.
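    Roughly, a notify-only DIUN service looks like this (the schedule and the Gotify endpoint are placeholders; the socket is mounted read-only since DIUN never touches containers):

    ```yaml
    services:
      diun:
        image: crazymax/diun:latest
        volumes:
          - ./diun-data:/data
          - /var/run/docker.sock:/var/run/docker.sock:ro   # read-only: it only checks, never updates
        environment:
          - DIUN_WATCH_SCHEDULE=0 */6 * * *                # check registries every 6 hours
          - DIUN_PROVIDERS_DOCKER=true
          - DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true      # watch all containers unless labelled otherwise
          - DIUN_NOTIF_GOTIFY_ENDPOINT=http://gotify.example.com   # placeholder notification target
          - DIUN_NOTIF_GOTIFY_TOKEN=AxxxxToken
        restart: unless-stopped
    ```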

  • roycorderov@alien.top · 1 year ago

    My Proxmox box is in production. I installed Watchtower before, and it broke 4 of my containers with bad updates, so I stopped using it…

    I’d like a service that just notifies me about new Docker image updates without actually updating anything.

  • ProbablePenguin@alien.top · 1 year ago

    Watchtower itself works great; it doesn’t need a GUI for what it does.

    But updating containers in general, either manually or automatically, always carries a risk of something breaking due to the new update.

    One thing you can do is make sure you’re not using :latest tags in your compose files, and instead pin major versions like postgres:13
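    Something like this in the compose file (the second image is a made-up example of pinning to a major line):

    ```yaml
    services:
      db:
        image: postgres:13              # stays on the 13.x line; a jump to 14 needs a deliberate edit
      app:
        image: ghcr.io/example/app:2    # hypothetical image, pinned to its 2.x line instead of :latest
    ```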

    And of course make sure you have backups going back multiple points in time in case something does break, and test those backups!

  • Cobthecobbler@alien.top · 1 year ago

    Maybe I’m using Watchtower wrong, but I don’t like how it behaves by default. It’s always messing up my container names, not removing old containers and just spinning up new ones, etc., and there’s no interface, so I can’t view jobs that ran overnight or see what’s queued or in progress. It just does its own thing, and I really don’t like that. I’ve installed and uninstalled it a few times, and it’s been uninstalled for a while now. When I want to upgrade my containers, I just redeploy my Portainer stacks and pull down the latest image manually. At least I get some control.

  • davidht0@alien.top · 1 year ago

    I’ve been using Watchtower for more than a year on all my containers with no issues so far. I have read many warnings against automating updates, but it has never broken anything in my case. I’m talking about 3 VMs (on Proxmox) and 2 Synology boxes: 5 instances of Watchtower keeping a total of 84 containers updated.

    Nonetheless, I try to play it safe and make daily backups in case something breaks. I’ve had a couple of containers break (nothing related to Watchtower, AFAIK) and recovered easily by restoring the latest backup.

    • Byolock@alien.top · 1 year ago

      I’ve been using Watchtower for approximately 2 years on about 20 containers. I had one issue where a container would not start after an update: the error message said I had an unsupported entry in the app’s configuration file. I looked up the app’s changelog and found that the option had been removed and replaced by something else. I had to change one line in the configuration. Not really a problem for me.

      I did decide to exclude my home-automation container and my Kasm container (my gateway into my network, a bit like Guacamole), though. Those could cause problems if they go offline unexpectedly.

      • davidht0@alien.top · 1 year ago
        • 3 VMs in Proxmox hosting 70 containers get backed up every day with Proxmox Backup Server (a VM on my primary NAS) to an NFS-mounted folder on my primary NAS.
        • The primary NAS (with 7 containers) gets backed up with Snapshot Replication to my secondary NAS every day.
        • The secondary NAS (with 7 containers) gets backed up with Snapshot Replication to my primary NAS every day.
        • And once a month I back up my primary NAS (not the whole thing, only the important folders) to a USB drive that I store at a friend's house.
      • GolemancerVekk@alien.top · 1 year ago

        Please take such advice with a large grain of salt. OP’s experience is very much not the norm. Especially for more complex apps like Jellyfin or Nextcloud, it’s almost guaranteed you’ll break them if you just update blindly.

  • SillyLilBear@alien.top · 1 year ago

    The latest version isn’t always the best version. In a home lab or home network, this is rarely a big problem, but in a production environment, I wouldn’t recommend it.

  • thekrautboy@alien.top · 1 year ago

    As an example, some software pushes out updates that can (and sometimes will) break your setup.

    Of course nobody pushes out something like that on purpose to mess with users, but mistakes happen all the time. And even if they don’t, some version upgrades require the user to take manual steps; when those are ignored and something like Watchtower just blindly upgrades, setups can and very likely will break.

    IMO, the small amount of time automatic updates save isn’t worth the amount of time it costs to fix such a mess when it occurs.

    For example, NPM (Nginx Proxy Manager) had an update months ago that broke many users’ setups. They did warn about it in the release notes, of course, but I remember people here on the sub saying “well damn, I used Watchtower and it updated NPM overnight, and I woke up and nothing worked anymore; it took me hours to figure out the reason and fix it”.

    https://github.com/NginxProxyManager/nginx-proxy-manager/releases/tag/v2.10.0

  • Old-Satisfaction-564@alien.top · 1 year ago

    I prefer to be there when containers are updated so that I can promptly fix anything that breaks.

    I have two Watchtower instances in one docker-compose file. The first container, ‘watchtower-monitor’, uses command: --monitor-only and warns me over Gotify about available updates but does not modify anything. The second, ‘watchtower-once’, uses command: --run-once and is usually inactive, since it performs all updates once and then exits. When I’m ready to update everything, I just docker-compose start watchtower-once to kick off the updates.
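    Roughly like this (the Gotify/shoutrrr URL is a placeholder standing in for the real notification target):

    ```yaml
    services:
      watchtower-monitor:
        image: containrrr/watchtower
        command: --monitor-only            # only checks and notifies, never touches containers
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_NOTIFICATIONS=shoutrrr
          - WATCHTOWER_NOTIFICATION_URL=gotify://gotify.example.com/AxxxxToken   # placeholder
        restart: unless-stopped

      watchtower-once:
        image: containrrr/watchtower
        command: --run-once                # updates everything once, then exits
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        restart: "no"                      # stays down until started manually
    ```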

  • azukaar@alien.top · 1 year ago

    Running an outdated version of a container (including DBs!) with known vulnerabilities that are very easy to exploit, including by bots, is so much worse than the risk of a container breaking after an update. Just monitor your server properly and you’ll be good.

  • gohankr@alien.top · 1 year ago

    If you want a highly available system, you should perform updates with a custom-made script where you can handle update issues yourself. Otherwise, Watchtower is good.

    • MRobi83@alien.top · 1 year ago

      Curious how a custom script to perform the update would be different than watchtower doing it? Is an automated update not an automated update regardless of what triggers it?

        • zoommicrowave@alien.top · 1 year ago

          Great script! The only thing I can recommend is adding a “docker image prune -af” command after all compose files have had their new images pulled and are up… unless you want old images taking up hard drive space or have a valid reason for keeping old images.
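          The script itself isn’t shown here, but the general shape being discussed would be roughly this (the /opt/stacks layout is just an assumption):

          ```sh
          #!/bin/sh
          # Sketch: pull and restart every compose project, then prune old images once at the end.
          set -e

          for dir in /opt/stacks/*/; do          # assumed layout: one compose project per directory
              echo "Updating $dir"
              (cd "$dir" && docker compose pull && docker compose up -d)
          done

          docker image prune -af                 # only after everything is pulled and up
          ```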

        • Independent_Till5832@alien.top · 1 year ago

          If you want zero downtime, you can scale the service to two containers and, if everything succeeds, just kill the old container. (You need a reverse proxy with load balancing, like Caddy.)
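          A rough sketch of that idea (the service and container names are made up, and exact --scale/--no-recreate behavior can differ between Compose versions):

          ```sh
          docker compose pull app                                # fetch the new image
          docker compose up -d --no-recreate --scale app=2 app   # keep the old container, add a replica on the new image
          # ...wait until the new replica is healthy and the proxy sends it traffic, then:
          docker stop myproject-app-1 && docker rm myproject-app-1   # retire the old replica (hypothetical name)
          ```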

      • Grosaprap@alien.top · 1 year ago

        You understand that at the point where they open-source it and publish it, it would essentially be Watchtower, right? The point of having a custom-made script is that you can customize it to your specific needs; if it’s a generalized tool, then just use Watchtower.

  • AnderssonPeter@alien.top · 1 year ago

    I use it, but only on containers where I can configure it not to do major updates; sadly, most images don’t publish the tags needed for this 😢

  • zoredache@alien.top · 1 year ago

    There are risks that a newer version of an image will accidentally break things, apply breaking changes, and so on.

    Good, frequent, tested backups can mitigate this. If an image breaks, you just restore your data from the backup and pull the older image.

    I use the klausmeyer/docker-registry-browser, and that recently broke, but it just needed me to provide an additional configuration variable.

    I use advplyr/audiobookshelf, which upgraded to a different database engine and schema a couple months ago. For some small subset of people (including me) the migration to the new database didn’t go well. But I had a backup from 6 hours before the update, so restoring and then using the older image until the fixes were released was easy.

    Even with the occasional issues I prefer letting watchtower automatically update most of my images for my home. I don’t really want to spend my time manually applying updates when 98% of the time it will be fine. But again, having a reliable and tested backup system is an essential part of why I am comfortable doing this.

      • zoredache@alien.top · 1 year ago

        My primary ‘backup’, or easy recovery method, is that I use ZFS and take frequent snapshots via sanoid. I have a mydumper job making backups of my MariaDB server, and I use syncoid to send snapshots to external storage. So most things can be fixed just by copying the files from an older snapshot.

        I also have completely separate backups of my system made using borg to storage I have at borgbase.com, but those only run a couple of times a week and cover only my ‘important’ data, not large things like downloaded video/music/etc. I’m thinking about switching borg out for restic, though, since restic is also compatible with BorgBase.