RAID is not a backup strategy. I use an “oh well”™ strategy. When my last hard drive failed, I said “oh well”™, bought a new SSD, and started from scratch. My patented “oh well”™ system works for both Linux and Windows. Learn how with only three easy courses, from £1495 each. Sign up today!
Microsoft offers a similar product called o365. Only your plan seems to be cheaper.
Also: don’t wait until you’ve got the perfect setup. A bad/incomplete backup is better than no backup.
THIS! RIGHT HERE!
When I was young and naive about digital things, I had NO BACKUP
One day I got a new laptop. Yay me. I transferred all the data from my old hard drive using some jank-ass local network setup, because I was still young and dumb about tech.
Six months go by, and my new laptop shit itself. Still no idea what happened, but it BSOD’d and a factory reset got it working again.
I still had my old laptop, so I spent about a week searching forums and reading everything I could find about how to build a PC, how laptop internals compare, data transfers, and literally anything else that might help, then pulled the old hard drive out without damaging anything and got at least some of my data back without issue…
I lost 6 months of new stuff on a much more capable laptop, but it’s better than losing EVERYTHING.
This is something I should be doing.
I need quite a lot of space for backups and don’t have enough. I should at least start with partial backups of whatever fits in the storage I do have.
My weak points right now are off-site backups and my homelab. My homelab isn’t backed up at all, and my personal data is only backed up on-site.
It’s better than nothing, but I should be doing better. I work in IT, after all. I think this is in the same vein as the mechanic’s car…
“Son, the time has finally come. Today I’m going to teach you TNO (Trust No One) security.”
my two-year-old stares blankly at me
They’ll be ready to learn about Cryptography, Chains Of Trust and Two Channel Authentication by the age of 3!
Testing restore time is a key part of being a miracle worker. That way you can tell them it’ll take three times as long.
Need to factor in the buffer time
Scotty, is that you?
looks at my raid of HDDs backing up my raid of SSDs
also, before everyone slaps me with “why aren’t you using zfs?”: it’s because I keep swapping out drives and testing various file systems whenever they get new features, which mdadm will accept even with the most insane, unrecommended setups.
Also using it in my mad pursuit to figure out what Stratis is supposed to be useful for.
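For the curious, the kind of thing I mean is roughly this (device names are placeholders for whatever mismatched drives are in the box this week):

```bash
# Mirror two drives; mdadm happily accepts mismatched members and
# just uses the smallest one's capacity. Device names are examples.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Put whatever filesystem is being tested this month on top.
sudo mkfs.ext4 /dev/md0

# Record the array so it assembles on boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```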
I like to annoy my IT friends by saying my backup strategy is chucking what little important data I have in my free Dropbox account.
It’s not even that important; I don’t care!
Why does that annoy them? That’s an off-site backup.
For one it’s not a full, real backup strategy. That’s supposed to include multiple tiers.
Also it’s instantly synced, so if I bonk my stuff locally, it could be bonked over there and history might not be able to save me depending on the situation.
And I guess if Dropbox dies my data dies.
Some people take their data seriously enough to worry about that kind of stuff. I don’t.
As someone working in IT, I don’t care about your data either. Just don’t come crawling to me for help when it all goes wrong.
That being said, OneDrive/Dropbox/Google Drive/whatever cloud storage… IMO, that’s fine for personal files.
I don’t personally like the “full disk” backups for personal stuff. It seems like massive overkill. Like, you’re backing up Windows and applications that are probably out of date, and stuff… Why? Plus restoring a full disk image to a bare-metal system is a massive pain in the backside. Unless it’s a server that needs to get up and running ASAP after a failure, just back up the important/unique files (generally the user folder), and if the worst happens, reinstall everything and restore the important files.
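For what it’s worth, the files-only approach can be a one-liner. A minimal sketch, assuming Linux and rsync; the paths and excludes are placeholders:

```bash
# Back up just the user folder to an external drive, skipping cache junk.
# Source/destination paths and the exclude are placeholders; adjust to taste.
# (--delete mirrors deletions, so pair it with snapshots if you want history.)
rsync -aHv --delete --exclude='.cache/' /home/alice/ /mnt/backupdrive/alice/
```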
The only point on my IT-approved backup list that you don’t meet is incremental/historical snapshots or restore points. Bluntly, if you’re okay not having those, and you accept that files deleted by accident through an otherwise legitimate method are just gone, then that’s your risk to take.
None of what you’re doing rustles my jimmies. As long as you’re making an informed decision and accepting the risks, the rest is on you. If you don’t care, then I certainly don’t care either.
I’ve heard a lot of horror stories about sites promising a free tier, then shrinking the free tier’s storage and locking accounts that go over it after a while.
Dropbox is not the best, but if you manage backups properly by storing copies in more than one place, then it’s fine.
At least it’s better than using chess
If you want to annoy them, tell them you just take snapshots as your backup strategy.
I mean, sure, you could be syncing your snapshots elsewhere as a real backup strategy, but you don’t need to tell them that.
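(If anyone wants that made concrete, a rough sketch assuming btrfs; the subvolume paths and remote host are placeholders, run as root:)

```bash
SNAP=/snapshots/home-$(date +%F)

# Read-only snapshot; btrfs send requires the source to be read-only.
btrfs subvolume snapshot -r /home "$SNAP"

# The part you don't have to tell them about: shipping it to another
# machine, which is what turns a snapshot into an actual backup.
btrfs send "$SNAP" | ssh root@backuphost btrfs receive /backups
```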
Experienced that recently. Every part of the backup was fine, except for the one with all the 2fa codes. Fun times.
Ooooooooooooooof
That stings
My house burned down right after I built my first RAID array. It hadn’t even been put into use. The plan was to move all the data from assorted servers, desktops and laptops in my house to the array, THEN back up that volume to something off-site. /sigh
I have an RPi running Nextcloud and a second RPi rsyncing all the files from it weekly. Nothing off-site, but at least it’s two separate drives on different machines. Anything I can improve here (cheaply)?
You might consider what you would do if your source has an issue that then syncs to your off-site copy. If it isn’t a lot of data, you might want to keep another copy or two in either location, created on a less frequent schedule, to give you a fallback.
As an example, if your files got ransomware encrypted and then sync’d to the off-site location, how would you recover your data?
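One cheap way to get that with the hardware you already have is rotated rsync snapshots on the second RPi, hard-linked so unchanged files cost almost no extra space. A rough sketch; the paths and retention count are placeholders:

```bash
#!/bin/sh
# Weekly on the second RPi: snapshot the Nextcloud data dir,
# hard-linking unchanged files against the previous run via --link-dest.
SRC=pi@nextcloudpi:/srv/nextcloud/data/   # placeholder source
DEST=/mnt/backup/snapshots                # placeholder destination
TODAY=$(date +%F)

# On the very first run --link-dest warns that "latest" is missing;
# rsync just does a full copy and carries on.
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY/"

# Point "latest" at the new snapshot and keep only the last 8.
ln -sfn "$DEST/$TODAY" "$DEST/latest"
ls -1d "$DEST"/20* | head -n -8 | xargs -r rm -rf
```

That way a ransomware’d sync only poisons the newest snapshot, and the older ones are still intact.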
Depends on how deep down the rabbit hole you’d like to go; you can always add complexity to any backup or redundancy system. Is your server just for personal use?
my backup is once in a while manually borging my files onto this chonk, but I have plans to improve on that
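(for the curious, the manual step is roughly this; the repo path is a placeholder:)

```bash
# One time: initialise an encrypted repo on the chonk.
borg init --encryption=repokey /mnt/chonk/borg-repo

# Each run: a timestamped, deduplicated archive.
borg create --stats --compression zstd \
  /mnt/chonk/borg-repo::'{hostname}-{now:%Y-%m-%d}' \
  ~/Documents ~/Pictures

# Trim old archives so the chonk doesn't fill up.
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /mnt/chonk/borg-repo
```

the “improve on that” plan is mostly just putting the borg create line in cron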
I have 8 of these chonks in a RAID array. Am I backed up yet?
(No)
I always say “an untested backup does not exist”
A backup that is untested is a backup that does not exist?
A backup that is untested does not exist?
There exists no backup that is untested?
Any interpretation of your statement seems cautionary to me. :-)
E: typo
I don’t understand what you are trying to say. My point is that until you have tested your backups you can’t rely on them, and thus you shouldn’t bring them into consideration when planning disaster recovery procedures until you know they are good.
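“Tested” doesn’t have to be fancy, either; restoring to a scratch directory and diffing will do. A sketch assuming a borg repo like the one mentioned upthread, with hypothetical paths:

```bash
# Restore the most recent archive into a scratch directory...
TMP=$(mktemp -d)
cd "$TMP"
LATEST=$(borg list --last 1 --format '{archive}' /mnt/chonk/borg-repo)
borg extract /mnt/chonk/borg-repo::"$LATEST"

# ...then compare the restored tree against the live data.
# (borg extract recreates paths without the leading slash.)
diff -r "$TMP/home/user/Documents" "$HOME/Documents" && echo "restore OK"
```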
I am not refuting or opposing your statement. I understood your point very well, but the brevity of your statement led to more than one interpretation.
I am merely pointing out whichever way one interprets your statement, it serves as a good warning about keeping one’s backups tested.
- RAID SHADOW LEGENDS
It INFURIATES me how many companies will spend money on backups, but never test that their backups restore or allow for continued functionality afterwards.
At one company, I banged this drum for years, and one day we had a situation where someone “accidentally” deleted all the media from a client website. I had to dig through several backups and rebuild from beta, which annoyed me endlessly, but I dropped the “I fucking told you so” several times, and hinted that our “restore scripts weren’t working as intended” to the client. It took me a full day to do what should have taken maybe 1-2 hours at most…
Alright. I have a confession to make…I actually DID make sure my eldest son heard ALL of these things… (And more, but wow…the WHOLE list…)
#proudNerdDadMoment
I lost a devastating amount of data when I was very young, lots of it irreplaceable. Since then I’ve always tried to have a redundant backup strategy.