Forum Discussion
SLAM-ER
Mar 10, 2019 (Aspirant)
ReadyNAS Ultra-6 data degraded after firmware update
I have an old ReadyNAS Ultra-6 that I upgraded to firmware 6.9.5. After the upgrade it shows the data volume as degraded. However, the data does not appear to be degraded: all drives show as healthy and I can still access the shares...
Hopchen
Mar 11, 2019 (Prodigy)
Hi SLAM-ER
Thanks for posting the mdstat log.
Firstly, let me clarify the behaviour of the NAS with the disk configuration that you have. The NAS actually raids partitions together, not entire disks. So, when using different-sized disks as you do, the NAS creates a 2TB partition on each of the 6 disks and raids those partitions together in a RAID 5. That forms one data raid - md126 in this instance.
md126 : active raid5 sdc3[8] sdf3[5] sde3[4] sdd3[3] sdb3[6]
      9743324160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
Next, the NAS takes the 3 remaining larger disks (the three 6TB drives), makes a 4TB partition on each, and raids those partitions together in a separate data raid - in this case, md127.
md127 : active raid5 sda4[0] sdc4[2] sdb4[1]
      7813753856 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
Thereafter, the NAS sticks the two data raids together at the filesystem level in order to present them as "one volume". So, what you are seeing with two data raids is perfectly normal when using different-sized disks. Sandshark - FYI, this will be the same configuration whether he factory defaults or not.
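If you are comfortable with SSH (assuming SSH access is enabled on the NAS, which it is not by default), you can see this layout for yourself. A rough sketch - the md126/md127 names are taken from your own mdstat output:

cat /proc/mdstat                # lists md126 and md127 and the partitions that belong to each
lsblk -o NAME,SIZE,TYPE         # shows the ~2TB and ~4TB data partitions carved out of each disk
mdadm --detail /dev/md126       # state, member partitions, and any missing or failed slot
mdadm --detail /dev/md127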
With that out of the way, we can see that md126 is degraded. The partition from sda (one of the disks) is missing in this raid.
md126 : active raid5 sdc3[8] sdf3[5] sde3[4] sdd3[3] sdb3[6]    <<<=== "sda3" missing as a participant here
      9743324160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]    <<<=== the raid notifying you that one disk is out of this raid
This makes the md126 raid degraded - i.e. non-redundant anymore. Another disk failure will render the entire volume dead at this point so it needs to be addressed. There could be several reasons for the disk going missing in the md126 raid but a firmware update is not a likely suspect. What is more likely is that the "sda" disk has some dodgy sectors on the partition used for the md126 raid and thus the NAS might have kicked it from that raid upon boot.
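Before replacing anything, you can also check why sda got kicked (again assuming SSH access); the kernel normally logs the read errors that cause md to drop a member, and the log bundle you can download from the web UI should contain similar kernel messages:

dmesg | grep -i sda | tail -n 50    # recent kernel messages for the suspect disk; look for read/UNC/media errors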
What is the health of the disks overall? Can you post the output of the disk_info.log (masking any serial numbers for your disks)?
Thanks
SLAM-ER
Mar 11, 2019 (Aspirant)
Device: sda
Controller: 0
Channel: 0
Model: WDC WD60EFRX-68MYMN0
Serial:
Firmware: 82.00A82
Class: SATA
RPM: 5700
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 11
Uncorrectable Sector Count: 0
Temperature: 32
Start/Stop Count: 136
Power-On Hours: 35247
Power Cycle Count: 77
Load Cycle Count: 15929
Device: sdb
Controller: 0
Channel: 1
Model: WDC WD60EFRX-68MYMN0
Serial:
Firmware: 82.00A82
Class: SATA
RPM: 5700
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 35
Start/Stop Count: 105
Power-On Hours: 37416
Power Cycle Count: 75
Load Cycle Count: 20776
Device: sdc
Controller: 0
Channel: 2
Model: ST6000NM0115-1YZ110
Serial:
Firmware: SN04
Class: SATA
RPM: 7200
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
End-to-End Errors: 0
Command Timeouts: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 37
Start/Stop Count: 16
Power-On Hours: 4434
Power Cycle Count: 6
Load Cycle Count: 5541
Device: sdd
Controller: 0
Channel: 3
Model: WDC WD20EFRX-68EUZN0
Serial:
Firmware: 82.00A82
Class: SATA
RPM: 5400
Sectors: 3907029168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 28
Start/Stop Count: 220
Power-On Hours: 19423
Power Cycle Count: 13
Load Cycle Count: 1799
Device: sde
Controller: 0
Channel: 4
Model: WDC WD20EFRX-68EUZN0
Serial:
Firmware: 82.00A82
Class: SATA
RPM: 5400
Sectors: 3907029168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 29
Start/Stop Count: 181
Power-On Hours: 19417
Power Cycle Count: 13
Load Cycle Count: 1794
Device: sdf
Controller: 0
Channel: 5
Model: WDC WD20EFRX-68EUZN0
Serial:
Firmware: 82.00A82
Class: SATA
RPM: 5400
Sectors: 3907029168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 29
Start/Stop Count: 223
Power-On Hours: 19423
Power Cycle Count: 14
Load Cycle Count: 1869
- Hopchen, Mar 11, 2019 (Prodigy)
Hi again
Thanks for posting the disk info. As suspected, disk sda has seen better days. From what I can see in the logs, the disk should be located in bay 1 (the first disk in the NAS). We can see that the disk has some Current Pending Sector errors.
Device: sda
Controller: 0
Channel: 0    <<<=== Bay 1
Model: WDC WD60EFRX-68MYMN0
Serial:
Firmware: 82.00A82
Class: SATA
RPM: 5700
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 11    <<<=== Bad sectors on the disk
Uncorrectable Sector Count: 0
Temperature: 32
Start/Stop Count: 136
Power-On Hours: 35247
Power Cycle Count: 77
Load Cycle Count: 15929
Pending sectors typically indicate imminent failure of the disk. An Acronis KB article describes the issue particularly well:
"Current Pending Sector Count S.M.A.R.T. parameter is a critical parameter and indicates the current count of unstable sectors (waiting for remapping). The raw value of this attribute indicates the total number of sectors waiting for remapping. Later, when some of these sectors are read successfully, the value is decreased. If errors still occur when reading some sector, the hard drive will try to restore the data, transfer it to the reserved disk area (spare area) and mark this sector as remapped.
Please also consult your machine's or hard disk's documentation.
Recommendations: This is a critical parameter. Degradation of this parameter may indicate imminent drive failure. Urgent data backup and hardware replacement is recommended."
https://kb.acronis.com/content/9133
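If you want to check that attribute yourself, a quick sketch (assuming SSH access and that smartctl from smartmontools is available on the NAS):

smartctl -A /dev/sda | grep -E -i 'pending|reallocated|uncorrectable'
# Attribute 197 (Current_Pending_Sector) should have a raw value of 0 on a healthy disk;
# a non-zero value that does not clear is a bad sign.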
It is quite likely that these bad sectors caused the NAS to kick the disk from one of the data raids. The fact that those sectors appear stuck in "pending" is an indication that they will probably never recover. Without further examination of the logs, I'd say you need to replace that disk asap. Note: you must replace it with a disk of the same size or larger.
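Once you swap the disk, the NAS should handle the mdadm side and start resyncing (automatically with XRAID; FlexRAID may require a manual step in the web UI). If you want to watch the rebuild from SSH, a minimal sketch:

cat /proc/mdstat                 # a rebuilding array shows a "recovery = xx.x%" progress line
watch -n 30 cat /proc/mdstat     # if watch is installed, refresh every 30 seconds until md126 shows [UUUUUU] again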
Your other disks appear to be healthy, which is good!
Cheers
- Sandshark, Mar 12, 2019 (Sensei - Experienced User)
Since many users (myself included) rarely reboot their NAS except when updating the OS, that reboot can be the point at which a problem that has actually persisted for a while is discovered. Folks tend to blame the update, but that's rarely the root cause; it's just the catalyst for the required reboot. You are likely in this group, so it's not "coincidence" at all; it's a convergence of circumstances.
11 pending sectors is potentially a problem, and drive replacement seems in order. Note that since the rebuild adds additional workload to the other old drives, ensuring your backup is up to date before proceeding is good insurance.
- SLAM-ER, Mar 12, 2019 (Aspirant)
Thanks for all the help guys. Pity the web interface couldn't tell me this info; it seems the NAS has access to all the data it needs to do so... :/ Even the LCD panel was flashing all the drives as degraded; surely it should only flash the faulty drive (disk 1) so the user knows what to replace.
I have backed it up, and will replace the faulty drive when I can afford it. :D
Appreciate the assistance.
- StephenB, Mar 12, 2019 (Guru - Experienced User)
It's the entire volume that is degraded, not a disk.
But I agree that Netgear needs to do a better job of identifying which disk(s) have dropped out of the array in the web UI. Forcing users to delve into mdstat.log, etc. isn't great.
Also, they should implement user-defined thresholds for disk health (pending sectors, etc.). 11 pending sectors is well below their hard-wired thresholds, so you will just see a "green" healthy disk in the web UI. IMO the hard-wired thresholds are too high (and based on posts here, Netgear support agrees that is the case).
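Until something like that exists, a DIY check is easy enough if you have SSH access and smartctl on the NAS (a rough sketch, assuming the disks enumerate as sda through sdf as in the logs above):

for d in /dev/sd[a-f]; do
  echo "== $d =="
  smartctl -A "$d" | grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'
done
# run this by hand now and then, or drop it into a cron job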