Forum Discussion
jedas
Dec 02, 2014Aspirant
Disk spin-down problem, RN104, fw 6.2.0
Hello, I'm using an RN104 with 2 x 4TB WD RED drives in JBOD/Flex mode (I don't need RAID). No apps are installed, and the Samba and DLNA services are stopped to debug this issue. HTTP/HTTPS/SSH are enabled. I've enabled disk spin down...
StephenB
Dec 05, 2014Guru - Experienced User
I think a lot depends on what you think bit rot is. Here's one article that is using the silent corruption definition: http://arstechnica.com/information-tech ... lesystems/
Most of the bit-rot discussions on this forum were triggered by that article, so I think posters here are generally aligned with this understanding of the definition.
How often this happens in practice, and what the mechanisms behind this "rot" are, is an interesting question, and I suspect there are different views. Personally I don't think disks are likely to return wrong data when they fail. Like you, I believe it is far more likely that the disk's own error checks will uncover the problem and it will return a read error.
Cloning a bad disk is one obvious way it can happen in real life. Software errors or crashes with uncompleted writes in the queue could of course do it as well. Memory failures that corrupt the data in the cache (before it is written) could do it. But personally I don't believe it happens spontaneously on the disk itself.
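The detection side of this, i.e. how btrfs can notice a silently corrupted block at all, comes down to per-block checksums stored alongside the data. Here's a minimal sketch of the idea in Python using zlib.crc32 (my illustration only; btrfs actually uses crc32c by default, and its on-disk layout is more involved):

```python
# Sketch: how a filesystem-level checksum catches silent corruption
# that the disk itself never reports as a read error.
import zlib

block = b"a data block, e.g. part of a JPG file"
stored_crc = zlib.crc32(block)  # written alongside the data at write time

# Bit rot: one bit flips on the platter, but the drive returns the
# sector cleanly, with no read error.
rotted = bytearray(block)
rotted[5] ^= 0b00010000

# Read path: recompute the checksum and compare against the stored one.
mismatch = zlib.crc32(bytes(rotted)) != stored_crc
print(mismatch)  # True -> corruption detected
```

With a redundant copy (as in a btrfs raid1 mirror), the block from the other disk whose checksum does verify can then be used to repair the bad copy, which is exactly what the article's test showed.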
From the article:
"...As a test, I set up a virtual machine with six drives. One has the operating system on it, two are configured as a simple btrfs-raid1 mirror, and the remaining three are set up as a conventional raid5. I saved Finn's picture on both the btrfs-raid1 mirror and the conventional raid5 array, and then I took the whole system offline and flipped a single bit—yes, just a single bit from 0 to 1—in the JPG file saved on each array."
He then goes on to say that RAID-5 didn't find the error, but that btrfs's experimental raid feature did find and fix it. No read errors occur; the data simply went wrong. (BTW, a RAID-5 scrub would have detected the mismatch, but it couldn't tell whether the parity block or a data block was wrong. In principle a RAID-6 scrub could be written that would find and repair the error, but I don't know whether a normal RAID-6 scrub does so.)
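That RAID-5 scrub limitation is easy to see with a toy stripe. The sketch below (my illustration, not any NAS vendor's code) uses two data blocks and one XOR parity block: the scrub notices the inconsistency, but nothing in the math says which of the three blocks is the bad one.

```python
# Sketch: a RAID-5 scrub detects a flipped bit but cannot localize it.
# Stripe = two data blocks + one XOR parity block.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Write path: parity is computed from the data blocks.
d0 = bytes([0b10110010] * 4)
d1 = bytes([0b01101100] * 4)
parity = xor_blocks(d0, d1)

# Silent corruption: flip a single bit in d0; no read error is reported.
corrupted = bytearray(d0)
corrupted[0] ^= 0b00000001
d0 = bytes(corrupted)

# Scrub: recompute parity from the data and compare with what's stored.
mismatch = xor_blocks(d0, d1) != parity
print(mismatch)  # True -> the scrub sees an inconsistency...

# ...but with a single parity block there is no way to tell whether d0,
# d1, or the parity itself holds the bad data. A second, independently
# computed syndrome (as in RAID-6) is what makes localization possible
# in principle.
```

A btrfs-style per-block checksum sidesteps the ambiguity entirely, since each block can be verified on its own.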