Researchers at the University of Southampton in the UK successfully stored the entire human genome sequence on an indestructible 5D optical memory crystal no bigger than a penny. The indestructibility claims are no joke: the crystals can withstand temperatures up to 1,000°C, cosmic radiation, and even direct impact forces of 10 tons per cm².
They say “billions of years” but that sounds like just the sort of thing a stray cosmic ray would ruin.
Maybe they’re planning on using a checksum for error correction like they do with RAID.
On that timescale, what are the odds that the checksum is still reliable?
Why would it be any different from the real data? Checksumming is basically just writing extra copies with math.
I’m asking why it would be more reliable if it has the same vulnerability to being corrupted.
Checksums are redundancy.
Right, but if the checksum is corrupted…
Yes, Mr. Smarty Pants, if all copies of the data are corrupted, the data is lost. More redundancy is more protection.
It isn’t writable
So?
Say checksum again
Checksum
What would be the point? You would just know that the data is invalid; you couldn’t fix it.
Use the checksum to correct the read, just like always. You don’t repair damaged ROM anyway.
You can’t
That’s not what a checksum is
Don’t make me show you the Wikipedia article.
Can’t argue with that logic.
I guess I will go back to using dd to hack the Pentagon
They probably mean an error-correcting code (ECC)? That said, you can use checksums to “correct” errors if you have redundant copies of the data (by reading from another copy when one copy’s checksum fails).
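For anyone following along: an error-correcting code really can fix a corrupted read with no second copy, which is what separates it from a plain checksum. A minimal sketch using the classic Hamming(7,4) code (this is a textbook construction, not what Southampton actually uses on the crystal):

```python
# Hamming(7,4): 4 data bits + 3 parity bits. Any single flipped bit in the
# 7-bit codeword can be located from the parity syndrome and flipped back.
def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                # covers codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                # covers codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-check parity group 1
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # re-check parity group 2
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # re-check parity group 3
    pos = s1 + 2 * s2 + 4 * s3       # 0 = clean; else 1-based error position
    if pos:
        c = c[:]
        c[pos - 1] ^= 1              # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]  # recover the 4 data bits

word = encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate a cosmic-ray bit flip
assert decode(word) == [1, 0, 1, 1]  # corrected on read, no rewrite needed
```

The correction happens entirely at read time, so a read-only medium is no obstacle; real archival formats use much stronger codes (e.g., Reed–Solomon) on the same principle.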
True, but that isn’t possible with just a checksum and a read-only medium.
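The copies-plus-checksum scheme the thread lands on can be sketched in a few lines (the two-copy layout and CRC-32 choice here are illustrative, not anything the article describes):

```python
import zlib

# Store two copies of the data, each paired with its own CRC-32.
def write(data):
    return [(data, zlib.crc32(data)), (data, zlib.crc32(data))]

# On read, return whichever copy still matches its stored checksum.
def read(copies):
    for payload, crc in copies:
        if zlib.crc32(payload) == crc:
            return payload           # this copy still verifies
    raise IOError("all copies corrupted")

disc = write(b"GATTACA")
disc[0] = (b"GAXTACA", disc[0][1])   # corrupt the first copy's payload
assert read(disc) == b"GATTACA"      # the intact second copy is returned
```

Note the checksum alone never repairs anything; it only tells you which copy to trust, which is why a single checksum on a read-only medium with no second copy can detect damage but not undo it.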