As the title says, I recently printed a nice case for my RPi3 and HDD that I intend to run as an offsite backup machine.

Looking for recommendations on what backup service to run. I want to back up my Nextcloud, and a “changes only” backup/cloning solution would be optimal, but I have yet to find one.

  • cestvrai@lemm.ee · 1 year ago

    It’s basic, but rsync is a reliable changes-only solution. You can do push or pull on a cronjob.
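
    For example, a minimal pull-style sketch (host names and paths are made up; assumes the Pi can reach the Nextcloud box over SSH):

      # Pull the Nextcloud data dir to the Pi, transferring only changed files:
      rsync -aH --delete --partial nextcloud-host:/var/www/nextcloud/data/ /mnt/backupdrive/nextcloud/

      # Example crontab entry on the Pi, nightly at 03:00:
      # 0 3 * * * rsync -aH --delete nextcloud-host:/var/www/nextcloud/data/ /mnt/backupdrive/nextcloud/ >> /var/log/nextcloud-rsync.log 2>&1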

    • user@infosec.pub · 1 year ago

      Would rsync corrupt the backup if the main file gets corrupted (seeing as this would be a change)?

    • tal@kbin.social · 1 year ago

      Duplicity uses the rsync algorithm at the application level; I’ve used it in the past. I’m presently using rdiff-backup, driven by backupninja out of a cron job, to back up to a local hard drive, and it does incremental backups (which would address @Nr97JcmjjiXZud’s concern). There’s also rsbackup, which also uses rsync, though I haven’t used it.
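
      For reference, a minimal rdiff-backup sketch with made-up paths (this is the older single-command syntax; newer releases also offer “rdiff-backup backup …” style subcommands):

        # Mirror the data dir and keep reverse increments of older versions:
        rdiff-backup /var/www/nextcloud/data /mnt/backupdrive/nextcloud

        # Drop increments older than four weeks:
        rdiff-backup --remove-older-than 4W /mnt/backupdrive/nextcloud

        # Restore the state from three days ago into a scratch directory:
        rdiff-backup -r 3D /mnt/backupdrive/nextcloud /tmp/restore-test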

      Two caveats I’d note, which may or may not matter for your specific use case (they apply to rdiff-backup, and I believe both also apply to the other two rsync-based solutions above, though it’s been a while since I’ve looked at them, so don’t quote me on that):

      • One property a backup system can have is immutable backups, where only the backup system itself can purge old ones. That could be useful if, for example, the system holding the data you’re preserving is broken into: you may not want someone who compromises the backed-up system to be able to wipe the old backups. Rdiff-backup expects to be able to connect to the backup system and write to it, so unless the backup server adds some additional layer of protection on top, that may be a concern for you (see the pull-mode sketch after this list).

      • Rdiff-backup doesn’t do dedup of data. That is, if you have a 1GB file named “A” and one byte in that file changes, it will only send over a small delta and will efficiently store that delta. But if you have another 1GB file named “B” that is identical to “A” in content, rdiff-backup won’t detect that and only use 1GB of storage – it will require 2GB and store the identical files separately. That’s not a huge concern for me, since I’m backing up a one-user system and I don’t have a lot of duplicate data stored, but for someone else’s use case, that may be important.
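
      On the first caveat, one common mitigation (not something rdiff-backup does for you) is to run the job in pull mode from the backup box, so the machine being backed up never holds credentials to the backup store. A sketch with made-up host/paths:

        # Run on the backup box; it reaches out to the live system over SSH:
        rdiff-backup nextcloud-host::/var/www/nextcloud/data /mnt/backupdrive/nextcloud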

      • adr1an@programming.dev · 1 year ago

        Try rsnapshot; it’s rsync with hard links. Though it’s better to use snapshots at the filesystem level (be it ZFS, Btrfs, or another filesystem with such a feature… copy-on-write might be required, I’ve never thought much about it).
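
        A minimal rsnapshot sketch, with made-up paths (rsnapshot wants tabs between fields in /etc/rsnapshot.conf, and cmd_ssh enabled for remote sources):

          # /etc/rsnapshot.conf (excerpt)
          snapshot_root   /mnt/backupdrive/rsnapshot/
          retain  daily   7
          retain  weekly  4
          backup  pi@nextcloud-host:/var/www/nextcloud/data/   nextcloud/

          # Cron entry to rotate the daily snapshots at 03:00:
          # 0 3 * * * /usr/bin/rsnapshot daily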

    • dan@upvote.au · 1 year ago

      Rsync has a bunch of downsides though: it only gives you one backup, any corrupted files will be mirrored in their corrupted state with no way to go back to an old version, and if the client system is hacked, the attacker can delete the remote backups. Not ideal.

      Something like Borgbackup is much better. It dedupes blocks so storing months of daily backups isn’t an issue, and it has an “append-only” mode that prevents the client from deleting backups. Even if the client system is hacked, the attacker can’t delete the backups.
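
      A minimal Borg sketch, with made-up hosts, paths, and key (append-only is enforced server-side here; note that prune/delete from the client won’t actually free space until the server admin compacts the repo with append-only lifted):

        # On the backup server, restrict this client's SSH key to an append-only repo
        # (one line in ~/.ssh/authorized_keys):
        # command="borg serve --append-only --restrict-to-path /srv/borg/nextcloud",restrict ssh-ed25519 AAAA... client-key

        # On the client: create the repo once, then archive and prune daily:
        borg init --encryption=repokey ssh://backup@backup-host/srv/borg/nextcloud
        borg create --stats ssh://backup@backup-host/srv/borg/nextcloud::'nextcloud-{now:%Y-%m-%d}' /var/www/nextcloud/data
        borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://backup@backup-host/srv/borg/nextcloud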