Mine is in the picture: 1544 days and counting!

It’s an EC2 nano instance that’s used only as a monitor for a few services that are running inside my VPN. It has served me well over all these years!


EDIT: Before everyone starts screaming about “security”:
It’s not internet facing and no ports are open; all it does is fire off a notification if/when something doesn’t reply.

Even in the unlikely scenario that someone gains access to it, that means my VPN is already compromised, and I’ve got bigger problems to worry about.
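The kind of check described above can be sketched in a few lines of Python (a minimal sketch only; the hosts, ports, and `notify()` mechanism below are assumptions, not the actual setup):

```python
import socket

# Hypothetical targets inside the VPN -- not the OP's real hosts.
TARGETS = [("10.8.0.10", 22), ("10.8.0.11", 80)]

def check_tcp(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def notify(host, port):
    # Placeholder: real code might send an email, SNS message, etc.
    print(f"ALERT: {host}:{port} did not reply")

def run_once():
    """Check every target once, alerting on anything unreachable."""
    for host, port in TARGETS:
        if not check_tcp(host, port):
            notify(host, port)
```

`run_once()` could then be driven by cron or a simple `while True` loop with a sleep in between.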

  • WxFisch@lemmy.world · ↑33 ↓1 · 1 year ago

    So you never apply patches or updates? That seems like an odd thing to be proud of, but different strokes for different folks, I guess.

    • abeltramo@lemmy.world (OP) · ↑10 ↓9 · 1 year ago

      It’s not internet facing and no ports are open; all it does is fire off a notification if/when something doesn’t reply.

      Even in the unlikely scenario that someone gains access to it (nobody has in the last ~4 years), that means my VPN is already compromised and I’ve got bigger problems to worry about.

      • Makussu@feddit.de · ↑14 · 1 year ago

        Makes sense, but even then I would just run automatic updates every few months, just to keep to best practice. Nonetheless, cool uptime; now do 10 years :)

        • abeltramo@lemmy.world (OP) · ↑6 ↓1 · 1 year ago

          Well now it’s becoming kind of a challenge: will AWS terminate/migrate the instance at some point, or will I be forced to reboot?

  • tal@kbin.social · ↑18 · 1 year ago

    I remember this story from about twenty years back hitting the news:

    https://www.theregister.com/2001/04/12/missing_novell_server_discovered_after/

    Missing Novell server discovered after four years

    In the kind of tale any aspiring BOFH would be able to dine out on for months, the University of North Carolina has finally located one of its most reliable servers - which nobody had seen for FOUR years.

    One of the university’s Novell servers had been doing the business for years and nobody stopped to wonder where it was - until some bright spark realised an audit of the campus network was well overdue.

    According to a report by Techweb it was only then that those campus techies realised they couldn’t find the server. Attempts to follow network cabling to find the missing box led to the discovery that maintenance workers had sealed the server behind a wall.

  • sv1sjp@lemmy.world · ↑13 · 1 year ago

    Personally, I shut down my server at midnight to let it relax for a bit. #MentallySupportingOurHomeServers But yes, I still agree with the comments above: even if the server is not directly connected to the Internet, upgrading is mandatory nowadays. Bots are everywhere, especially with all of these AI tools.

  • PeterPoopshit@sh.itjust.works · ↑3 · edited · 1 year ago

    I think I got up to 300 or so days on my old Athlon XP Gentoo server. I have “upgraded” since then, and my current server can’t go more than 2 days. I have an Arduino connected to the motherboard’s reset-button pin that resets it whenever the bash script that communicates with the Arduino stops running, but even that somehow still crashes at least once a week and needs manual intervention.

  • JoeKrogan@lemmy.world · ↑2 · 1 year ago

    I don’t know; it depends on the patches, really. I have automatic updates, so I guess a few months would probably be the longest between kernel patches.

  • SleepyBear@lemmy.myspamtrap.com · ↑2 · 1 year ago

    Many years ago, when I worked for a monitoring software company, someone found a bug in the uptime monitoring rules: they reset after a year.

    It was patched, and when I upgraded one client, their whole Solaris plant immediately went red and alerted. They told me to double the threshold to two years, and some stuff was still alerting.

    They just said they’d try to get around to rebooting it, but it was all stable.

    Everywhere else I’ve worked enforces regular reboots.

  • negativenull@negativenull.com · ↑1 · 1 year ago

    My father ran an HP-UX server that did inventory management (not internet connected) that had an uptime greater than 10 years before it was migrated.

      • xebix@lemmy.world · ↑11 ↓2 · 1 year ago

        I logged in just to downvote.

        Now for a relevant comment. I used to love those high uptime values as well, but I’ll echo the security sentiments of others in this thread. On the other hand, as you said it’s not public facing, so not as big a deal. I still think it’s kinda cool!

        • abeltramo@lemmy.world (OP) · ↑8 ↓1 · 1 year ago

          Thanks, I wasn’t expecting everyone to take this so seriously, it was supposed to be funny…

          • eric5949@lemmy.cloudaf.site · ↑3 ↓3 · 1 year ago

            Well, propagating the idea that it’s cool to have years-long uptimes, regardless of the fact that it may be practical for you in this instance, is nonetheless dangerous.

        • jax@lemmy.cloudhub.social · ↑5 ↓2 · 1 year ago

          Just because it’s not public facing doesn’t mean that it’s not an issue. It might be less of an issue, but it is still a massive vulnerability.

          All it takes is one misconfiguration or other vulnerable system to use this as a jumping off point to burrow into other systems. Especially if this system has elevated access to sensitive locations within your network.