I have recently become interested in mini PCs, but one thing that is stopping me is a feeling that bit rot could cause me to lose data.

Is bit rot something to worry about when storing data for services such as Git or Samba? I have another PC right now that is set up with btrfs raid1 and backed up both locally and to the cloud, but I was thinking about downsizing to save space and power.

I know many people use mini PCs such as ThinkCentres, OptiPlexes, and EliteDesks. Should I be worried about losing data to bit rot, or is it a really rare occurrence?

Let’s say I have backups with a year of retention. Couldn’t the data become corrupt without my noticing until after that year? For example, archived data that I don’t look at often but might need in the future.

  • dragontamer@lemmy.world · 10 months ago

    Wait, what’s wrong with issuing a ZFS scrub every 3 to 6 months or so? If it detects bitrot, it immediately fixes it. As long as the bitrot wasn’t too extensive, most of your data should be fixed.

    If you’re playing with multiple computers, choosing one to be a NAS and being extremely careful with the data it’s storing makes sense. Regularly scanning all files and attempting repairs is just a few clicks with most NAS software, and it can easily be automated.
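
    As a sketch, automating that scrub is a single cron line. The pool name tank and the every-three-months schedule below are placeholders, not anything from this thread; btrfs users would run btrfs scrub start on the mount point instead.

```shell
# /etc/cron.d/scrub -- verify all checksums and repair from redundancy.
# Runs at 03:00 on the 1st of every third month; "tank" is a placeholder
# pool name, so substitute your own.
0 3 1 */3 * root /sbin/zpool scrub tank
```

A systemd timer works just as well; the point is that nobody has to remember to kick it off by hand.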

    • A Mouse (OP) · 10 months ago

      I guess my primary concern was what happens if I don’t have the computer with ZFS (in my case btrfs, but it’s a similar thing). Maybe it is for the best that I keep the raid setup to scrub and make sure important data is safe, and use the smaller single-disk mini PC for services and data that isn’t as important.

      • dragontamer@lemmy.world · 10 months ago

        If you have a NAS, then just put iSCSI disks on the NAS, and share those iSCSI fake-disks with your mini PCs over the network.

        iSCSI is “pretend to be a hard drive over the network”. The iSCSI volume can sit on top of ZFS or btrfs, meaning your scrubs will fix any issues. So your mini PC can have a small C: drive, but be configured so that most of its data lives on a D: drive that is really an iSCSI share on the NAS.

        iSCSI is very low-level. Windows literally thinks it’s dealing with a (slow) hard drive over the network. As such, it works even in complex situations like Steam installations, albeit at network speeds (every read has to go to the NAS) rather than direct-attached hard drive or SSD speeds.
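
        As a rough sketch, the NAS side amounts to carving a virtual block device (a zvol) out of the pool and exporting it. The pool name, size, IQN, and the Linux targetcli syntax below are all assumptions for illustration; NAS appliances like TrueNAS expose the same steps in their GUI, and exact syntax varies by OS.

```shell
# Create a sparse 200 GB virtual block device backed by ZFS, so the
# pool's checksums and scrubs cover the mini PC's data too.
# "tank" and "minipc-d" are placeholder names.
zfs create -s -V 200G tank/minipc-d

# Export it over iSCSI with Linux targetcli (syntax is a sketch and
# varies between targetcli versions and NAS operating systems).
targetcli /backstores/block create minipc-d /dev/zvol/tank/minipc-d
targetcli /iscsi create iqn.2023-09.lan.nas:minipc
targetcli /iscsi/iqn.2023-09.lan.nas:minipc/tpg1/luns create /backstores/block/minipc-d
```

On the Windows side, the built-in iSCSI Initiator then sees the zvol as an ordinary disk to format as D:.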


        Bitrot is a solved problem. It is solved by using bitrot-resilient filesystems with regular scans / scrubs. You build everything on top of solved problems, so that you never have to worry about the problem ever again.

        • A Mouse (OP) · 10 months ago

          Thanks for that information about iSCSI, I hadn’t looked into it. I will probably just stick with my primary server for the moment, maybe rebuild it into a NAS, and then use mini PCs with it as the storage.

    • markstos@lemmy.world · 10 months ago

      You don’t define bitrot. If you leave software alone with no updates for long enough, yes, there will be problems.

      There will eventually be a security issue with no fix, or a new OS or hardware it doesn’t work on.

      Backups can also fail over time if restores are not tested periodically.

      This recently happened to me. A server wouldn’t boot anymore, so we restored from backup, but it still wouldn’t boot. The issue was that we’d introduced a change that caused the boot failure. To fix it by restoring from backup, we’d have needed a backup from before that change. It turned out we had one, but we didn’t realize what the issue was at the time.

      The other moral is to reboot frequently, if only to confirm the system can still boot.

      • dragontamer@lemmy.world · 10 months ago

        That’s not what storage engineers mean when they say “bitrot”.

        “Bitrot”, in the scope of ZFS and btrfs, means a hard drive’s “0” randomly flipping to a “1” (or vice versa) while in storage. It is a well-known problem and can happen within months. A 20 TB drive is a collection of 160 trillion bits, so there’s a high chance that at least some of those bits malfunction over a period of double-digit months.

        Each problem has a solution. In this case, bitrot is “solved” by the above procedure because:

        1. Bitrot usually doesn’t happen within single-digit months, so regular ~6-month scrubs nearly guarantee that any bitrot you find will be limited in scope: just a few bits at most.

        2. Filesystems like ZFS and btrfs are designed to handle many bits of bitrot safely.

        3. Scrubbing is the process of reading every block and, where bitrot is detected, repairing the file from redundancy.

        Of course, if hard drives are of noticeably worse quality than expected (e.g. you get a large number of failures in a short time frame), or if you’re not using the right filesystem, or if you go too long between checks (e.g. 25 months between scrubs instead of 6), then you might lose data. But we can only plan for the “expected” kinds of bitrot: the kinds that happen within 25 months, or 50 months, or so.
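
        The idea behind a scrub can be sketched in a few lines of shell: keep a checksum next to the data, re-read everything later, and rewrite any copy whose hash no longer matches. This is only a toy model with made-up file names; ZFS/btrfs do the same thing per block, automatically, using the redundant copy as the repair source.

```shell
#!/bin/sh
# Toy "scrub": two mirrored copies of a file plus a stored checksum.
set -e
dir=$(mktemp -d)
printf 'family photos' > "$dir/disk1"
cp "$dir/disk1" "$dir/disk2"
sha256sum "$dir/disk1" | cut -d' ' -f1 > "$dir/checksum"

# Simulate bitrot: a character silently flips on the second "disk".
printf 'famXly photos' > "$dir/disk2"

# The scrub: re-hash every copy, restore mismatches from a good copy.
good="$dir/disk1"   # in this toy, disk1 still matches the checksum
for copy in "$dir/disk1" "$dir/disk2"; do
    if [ "$(sha256sum "$copy" | cut -d' ' -f1)" != "$(cat "$dir/checksum")" ]; then
        echo "bitrot detected in $copy, repairing from mirror"
        cp "$good" "$copy"
    fi
done
```

Note what happens without the checksum or the mirror: you could detect the damage (single-disk btrfs) but not repair it, which is why raid1-style redundancy matters here.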

        If you’ve gotten screwed by a hard drive (or SSD) that bitrots away in five days or something awful (maybe someone dropped the drive and the head scratched a ton of the data away), then there’s not much you can do about that.

  • synthsalad@mycelial.nexus · 10 months ago

    Nightly automated runs of the chkbit script were the only thing that alerted me to the fact that either the SSD or the storage controller in my Mac Mini had issues and was corrupting data. I was very thankful to already have that automation in place for exactly that scenario.

    In theory it shouldn’t be necessary on filesystems with built-in checksumming.

      • synthsalad@mycelial.nexus · 10 months ago

        This is what I use. It works with any filesystem (it writes hashes to hidden dot-files) and on any OS, as long as Python is available: https://pypi.org/project/chkbit/

        It runs ahead of my nightly backup. If it fails, the backup won’t proceed.

        Edit: Because the script hashes every file, it uses a ton of both disk I/O and CPU while it runs, but the tradeoff is worthwhile to me.
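
        A minimal sketch of that gating as a nightly cron job: the paths and the rsync destination are placeholders, chkbit is assumed to be installed (pip install chkbit), and it is assumed to exit non-zero when a file no longer matches its stored hash.

```shell
#!/bin/sh
# Nightly job: verify stored hashes first; only back up if everything
# checks out, so corrupted files never overwrite good copies in the
# backup. "-u" updates the hash index for new/changed files.
chkbit -u /srv/data || { echo 'corruption detected, backup aborted' >&2; exit 1; }
rsync -a --delete /srv/data/ backup@nas:/backups/data/
```

The ordering is the whole point: integrity check, then backup, never the reverse.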

  • NullPointerException@lemm.ee · 10 months ago

    Honestly, I don’t think it’s a significant issue, but if you’re worried, just use a filesystem that can repair itself, like ZFS (not sure if btrfs can do that too, but it might).

    • hedgehog@ttrpg.network · 10 months ago

      And if you’re really concerned about data integrity then you should also ensure that your server has ECC RAM.

    • vividspecter@lemm.ee · 10 months ago

      (not sure if btrfs can do that too but it might)

      It can. And both will alert you to problems if you do regular scrubs, which might be enough even on non-RAID installs if you have secondary backups.
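
      For reference, a manual btrfs scrub cycle looks like this (the mount point is a placeholder; on raid1 profiles, detected errors are repaired from the other copy, while single-disk setups only get the alert):

```shell
# Start a scrub in the background, then check on it later
btrfs scrub start /mnt/data
btrfs scrub status /mnt/data

# Per-device read/checksum error counters accumulated so far
btrfs device stats /mnt/data
```

Non-zero counters in the stats output are the early-warning signal worth wiring into monitoring.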

  • SheeEttin@lemmy.world · 10 months ago

    How life-or-death critical would it be if you lost one of those files?

    Resilient filesystems/raid/multiple backup points should be more than enough.

    • mustardman@discuss.tchncs.de · 10 months ago

      Resilient filesystems/raid/multiple backup points should be more than enough.

      A word of caution on relying on backups without the other error-prevention measures you mention: if it takes you a while to notice that bitrot has ruined a file, the corruption may have already propagated through your backups. The only type of backup that accounts for this is an archival backup, such as tape or quality Blu-ray discs.

      • A Mouse (OP) · 10 months ago

        Yeah, that’s kind of what I expected, and I am now thinking of keeping my setup as it is and getting a mini PC for less important data and services to tinker with.

    • A Mouse (OP) · 10 months ago

      That is a very good question; it makes me think about organizing my data better. Data such as task lists and daily notes isn’t necessarily very important, while family photos and documents are.

      • SeriousBug@infosec.pub · 10 months ago

        For any family photos and documents you can’t afford to lose, make sure you have backups of them. A RAID array does not mean you don’t need backups: you want at least three copies, at least one of them offsite.

        The copy in your RAID array is one copy. You can back that up to an external hard drive or something as a second copy. Then have an offsite backup on something like Backblaze as your third copy.
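
        A sketch of that 3-2-1 layout with stock tools; the paths, the external-drive mount point, and the rclone remote name are placeholders for illustration:

```shell
# Copy 1 is the live data on the RAID array itself (/srv/data).

# Copy 2: an external drive, attached only for the duration of the backup.
rsync -a --delete /srv/data/ /mnt/external/data/

# Copy 3: offsite object storage, e.g. a Backblaze B2 bucket via rclone.
rclone sync /srv/data b2remote:my-backups/data
```

Adding versioned snapshots (or a tool like restic/borg) on copies 2 and 3 also protects against corruption that would otherwise be mirrored into every copy.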

        • A Mouse (OP) · 10 months ago

          Thanks for the reassurance. What I currently have is exactly that: RAID for the local data, a spare drive that is mounted only while data is being backed up, and an rsync to an offsite cloud provider. I figure my current setup is really reliable, as I have slowly been researching and improving it over a few years.

          I have a sort of itch to play with a mini PC. I guess it would be best not to put any of my important data at risk by downgrading the setup, but this is a good time to really sort out what I need and is important, and what isn’t and could be re-obtained if something fails on the mini PC.

      • thelittleblackbird@lemmy.world · 10 months ago

        Save yourself a headache and use btrfs/ZFS with periodic checks, as suggested in another post.

        Who cares whether it is a real problem when it has a simple and inexpensive solution?

  • Decronym@lemmy.decronym.xyz (bot) · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters    More Letters
    NAS              Network-Attached Storage
    RAID             Redundant Array of Independent Disks (for mass storage)
    SSD              Solid State Drive (mass storage)

    [Thread #102 for this sub, first seen 1st Sep 2023, 18:15] [FAQ] [Full list] [Contact] [Source code]