
ReadyNAS Ultra-6 data degraded after firmware update

SLAM-ER
Aspirant

ReadyNAS Ultra-6 data degraded after firmware update

I have an old ReadyNAS Ultra-6 that I upgraded to firmware 6.9.5. After the upgrade it reports the data volume as degraded. However, the data itself seems fine: all drives show as healthy, I can still access the shares, and the files I have copied off work without any problems.

 

When I go into the web interface I can see it has two RAID groups listed that share some (but not all) of the same HDDs. Is this normal (or even possible)? I'm wondering whether the upgrade found an old array config and reactivated it, causing my issue, but I really don't know.

 

While I have copied off some of the more important data, I do not have sufficient space to copy off the remainder (non-critical stuff, but I'd rather not have to download it all again).  So before I blow it all away and start from scratch, is there a way to use the console through SSH to fix it without wiping the config?  I know nothing about Linux so I'm not keen to start blindly trying stuff, but I can follow instructions... 

 

I don't even know which of the RAID groups is correct, if either.  It used to have 6x2TB drives, and I'd swapped in 3x 6TB drives a while back, so now I'm not sure what the correct configuration should be as it's been a while since I looked at it.

 

Anyway, if anyone has any instructions on how to diagnose or fix this weird issue I'd be grateful for the help.  If not I guess I will just wipe the config and start from scratch.  😞

 

Thanks

Matthew

 

Model: ReadyNAS RNDU6000 | ReadyNAS Ultra 6 (chassis only)
Message 1 of 16


All Replies
Hopchen
Prodigy

Re: ReadyNAS Ultra-6 data degraded after firmware update

Hey @SLAM-ER 

 

Sounds like a disk probably dropped from the raid.

 

Can you download the logs and post the output of: mdstat.log

You can just post the first section, called "Personalities"
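If it's easier, you can also read that section live on the NAS over SSH instead of downloading the full log bundle. A minimal sketch, assuming SSH access is enabled and the usual Linux md tools are on the firmware:

# The "Personalities" section of mdstat.log is the kernel's live RAID status
cat /proc/mdstat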

 

 

Message 2 of 16
Sandshark
Sensei

Re: ReadyNAS Ultra-6 data degraded after firmware update

Since you say you "swapped in" the 6TB drives, I'm assuming the other bays still have the 2TB drives. So yes, you'll have two RAID groups: one of 6x 2TB and one of 3x 4TB (the extra space on the 6TB drives). Only a factory default will change that (but there is normally no need to do so). "Degraded" means "no redundancy", not "no access", so it's likely true. Since you have no full backup, you need to fix that before you lose another drive and the volume really does become dead. Unfortunately, fixing it could put your other drives at higher risk if a resync is needed (which I think it will be). So fixing the lack of backup should also be on your short list.

 

It is odd that all drives show green if the volume is degraded, unless it is currently re-syncing.  @Hopchen should be able to tell you more from the log, but if you hover over the green dot, do all drives say they are part of volume "data"?

Message 3 of 16
SLAM-ER
Aspirant

Re: ReadyNAS Ultra-6 data degraded after firmware update

It was all 2TB drives; then I swapped in 6TB drives one at a time, so now it has 3x 6TB and 3x 2TB.

Yeah, I did more reading and saw that multiple RAID groups are the result of X-RAID expansion.

All drives are green and all are listed as part of 'data'. On the unit's LCD display, where it says degraded, all the drive bays are shown flashing; whether that means they have failed or are just populated, I don't know.

I will post logs etc when I get home.
Message 4 of 16
SLAM-ER
Aspirant

Re: ReadyNAS Ultra-6 data degraded after firmware update

Also, the issue only started at the reboot after flashing 6.9.5, so it would be uncanny timing if it were just a failed drive...
Message 5 of 16
SLAM-ER
Aspirant

Re: ReadyNAS Ultra-6 data degraded after firmware update

I was going to post the logs, but I can't attach ZIPs, can't rename them to PNG (or another allowed type) and attach them, and can't post the raw text because of the 2000-character message limit... I give up for now.

Message 6 of 16
Hopchen
Prodigy

Re: ReadyNAS Ultra-6 data degraded after firmware update

Hi @SLAM-ER 

 

I meant just post the first like 10 lines of the mdstat.log 🙂

But alternatively, upload the log-set to a Google link or similar and PM me the link. I will take a look at them then.

 

Cheers

 

 

Message 7 of 16
SLAM-ER
Aspirant

Re: ReadyNAS Ultra-6 data degraded after firmware update

Oh OK, thanks, I didn't know which log was relevant...

 

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid5 sdc3[8] sdf3[5] sde3[4] sdd3[3] sdb3[6]
9743324160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]

md127 : active raid5 sda4[0] sdc4[2] sdb4[1]
7813753856 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid10 sdb2[0] sdf2[4] sde2[3] sdd2[2] sdc2[1]
1308160 blocks super 1.2 512K chunks 2 near-copies [5/5] [UUUUU]

md0 : active raid1 sdd1[3] sdc1[6] sdb1[7] sda1[8] sdf1[5] sde1[4]
4190208 blocks super 1.2 [6/6] [UUUUUU]

unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Thu Jun 7 20:37:32 2018
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent

Update Time : Mon Mar 11 11:29:57 2019
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Name : 33eac74a:0 (local to host 33eac74a)
UUID : 2744a7e7:b794ca6f:aa3786f5:ba651776
Events : 31405

Number Major Minor RaidDevice State
3 8 49 0 active sync /dev/sdd1
4 8 65 1 active sync /dev/sde1
5 8 81 2 active sync /dev/sdf1
8 8 1 3 active sync /dev/sda1
7 8 17 4 active sync /dev/sdb1
6 8 33 5 active sync /dev/sdc1
/dev/md/1:
Version : 1.2
Creation Time : Fri Feb 22 20:41:39 2019
Raid Level : raid10
Array Size : 1308160 (1277.50 MiB 1339.56 MB)
Used Dev Size : 523264 (511.00 MiB 535.82 MB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent

Update Time : Wed Feb 27 20:39:47 2019
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : near=2
Chunk Size : 512K

Name : 33eac74a:1 (local to host 33eac74a)
UUID : d40c0f60:7a479855:513d41ad:b4bc4e67
Events : 19

Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2
2 8 50 2 active sync /dev/sdd2
3 8 66 3 active sync /dev/sde2
4 8 82 4 active sync /dev/sdf2
/dev/md/data-0:
Version : 1.2
Creation Time : Thu Jun 7 20:38:13 2018
Raid Level : raid5
Array Size : 9743324160 (9291.96 GiB 9977.16 GB)
Used Dev Size : 1948664832 (1858.39 GiB 1995.43 GB)
Raid Devices : 6
Total Devices : 5
Persistence : Superblock is persistent

Update Time : Mon Mar 11 11:30:05 2019
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : 33eac74a:data-0 (local to host 33eac74a)
UUID : 14e4a2ae:cad6d7ae:bd0ecea1:3d0a849e
Events : 184059

Number Major Minor RaidDevice State
- 0 0 0 removed
8 8 35 1 active sync /dev/sdc3
6 8 19 2 active sync /dev/sdb3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3
/dev/md/data-1:
Version : 1.2
Creation Time : Sat Jan 5 15:53:23 2019
Raid Level : raid5
Array Size : 7813753856 (7451.78 GiB 8001.28 GB)
Used Dev Size : 3906876928 (3725.89 GiB 4000.64 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Update Time : Mon Mar 11 11:30:05 2019
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : 33eac74a:data-1 (local to host 33eac74a)
UUID : cecbaa25:ae602552:29c26317:d86b6fdf
Events : 8770

Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 20 1 active sync /dev/sdb4
2 8 36 2 active sync /dev/sdc4

Message 8 of 16
Hopchen
Prodigy

Re: ReadyNAS Ultra-6 data degraded after firmware update

Hi @SLAM-ER 

 

Thanks for posting the mdstat log.

 

Firstly, let me clarify the behaviour of the NAS with the disk configuration that you have. The NAS actually raids partitions together, not entire disks. So, when using different-sized disks as you do, the NAS makes a 2TB partition on each of the 6 disks and raids those partitions together in a RAID 5. That forms one data RAID - md126 in this instance.

md126 : active raid5 sdc3[8] sdf3[5] sde3[4] sdd3[3] sdb3[6]
9743324160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]

Next, the NAS takes the 3 remaining larger disks, makes a 4TB partition on each, and raids those partitions together in a separate data RAID. In this case, md127.

md127 : active raid5 sda4[0] sdc4[2] sdb4[1]
7813753856 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

Thereafter, the NAS sticks the two data RAIDs together at the filesystem level to make them appear as "one volume". So, what you are seeing with two data RAIDs is perfectly normal when using different-sized disks. @Sandshark - FYI, this will be the same configuration whether he factory defaults or not.
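For anyone who wants to see this layering for themselves over SSH, a read-only sketch (assuming mdadm and lsblk are available on the firmware; device names follow the mdstat output above):

# Both data RAIDs are built from partitions, not whole disks
cat /proc/mdstat

# Per-array detail: member partitions, level, and state
mdadm --detail /dev/md126    # the 6x ~2TB partitions (sda3-sdf3)
mdadm --detail /dev/md127    # the 3x ~4TB partitions (sda4-sdc4) on the larger disks

# How the partitions are laid out on each disk
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

None of these commands change anything; they only report.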

 

With that out of the way, we can see that md126 is degraded. The partition from sda (one of the disks) is missing in this raid.

md126 : active raid5 sdc3[8] sdf3[5] sde3[4] sdd3[3] sdb3[6] <<<=== "sda3" missing as a participant here.
9743324160 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU] <<<=== the raid notifying you that one disk is out of this raid.

This makes the md126 raid degraded - i.e. non-redundant anymore. Another disk failure will render the entire volume dead at this point so it needs to be addressed. There could be several reasons for the disk going missing in the md126 raid but a firmware update is not a likely suspect. What is more likely is that the "sda" disk has some dodgy sectors on the partition used for the md126 raid and thus the NAS might have kicked it from that raid upon boot.
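If you want to confirm that from the SSH console before changing anything, the kicked partition's own RAID superblock can be compared against a healthy member. A sketch, assuming mdadm is present; these are read-only commands:

# What the dropped partition's superblock says about itself (state, event count)
mdadm --examine /dev/sda3

# Compare against a member that is still in the array
mdadm --examine /dev/sdb3

# The array's current view of its members
mdadm --detail /dev/md126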

 

What is the health of the disks overall? Can you post the output of the disk_info.log (masking any serial numbers for your disks)?

 

Thanks

 

 

Message 10 of 16
SLAM-ER
Aspirant

Re: ReadyNAS Ultra-6 data degraded after firmware update

Device: sda
Controller: 0
Channel: 0
Model: WDC WD60EFRX-68MYMN0
Serial: 
Firmware: 82.00A82
Class: SATA
RPM: 5700
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 11
Uncorrectable Sector Count: 0
Temperature: 32
Start/Stop Count: 136
Power-On Hours: 35247
Power Cycle Count: 77
Load Cycle Count: 15929

Device: sdb
Controller: 0
Channel: 1
Model: WDC WD60EFRX-68MYMN0
Serial: 
Firmware: 82.00A82
Class: SATA
RPM: 5700
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 35
Start/Stop Count: 105
Power-On Hours: 37416
Power Cycle Count: 75
Load Cycle Count: 20776

Device: sdc
Controller: 0
Channel: 2
Model: ST6000NM0115-1YZ110
Serial: 
Firmware: SN04
Class: SATA
RPM: 7200
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
End-to-End Errors: 0
Command Timeouts: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 37
Start/Stop Count: 16
Power-On Hours: 4434
Power Cycle Count: 6
Load Cycle Count: 5541

Device: sdd
Controller: 0
Channel: 3
Model: WDC WD20EFRX-68EUZN0
Serial: 
Firmware: 82.00A82
Class: SATA
RPM: 5400
Sectors: 3907029168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 28
Start/Stop Count: 220
Power-On Hours: 19423
Power Cycle Count: 13
Load Cycle Count: 1799

Device: sde
Controller: 0
Channel: 4
Model: WDC WD20EFRX-68EUZN0
Serial: 
Firmware: 82.00A82
Class: SATA
RPM: 5400
Sectors: 3907029168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 29
Start/Stop Count: 181
Power-On Hours: 19417
Power Cycle Count: 13
Load Cycle Count: 1794

Device: sdf
Controller: 0
Channel: 5
Model: WDC WD20EFRX-68EUZN0
Serial: 
Firmware: 82.00A82
Class: SATA
RPM: 5400
Sectors: 3907029168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 29
Start/Stop Count: 223
Power-On Hours: 19423
Power Cycle Count: 14
Load Cycle Count: 1869

 

Message 11 of 16
Hopchen
Prodigy

Re: ReadyNAS Ultra-6 data degraded after firmware update

Hi again

 

Thanks for posting the disk info. As suspected, disk sda has seen better days. From what I can see in the logs, the disk should be located in bay 1 (the first disk in the NAS). We can see that the disk has some Current Pending Sector errors.

Device: sda
Controller: 0
Channel: 0 <<<=== Bay 1
Model: WDC WD60EFRX-68MYMN0
Serial:
Firmware: 82.00A82
Class: SATA
RPM: 5700
Sectors: 11721045168
Pool: data
PoolType: RAID 5
PoolState: 3
PoolHostId: 33eac74a
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 11 <<<=== Bad sectors on the disk
Uncorrectable Sector Count: 0
Temperature: 32
Start/Stop Count: 136
Power-On Hours: 35247
Power Cycle Count: 77
Load Cycle Count: 15929


Pending sectors typically indicate imminent failure of the disk. An Acronis KB article describes the issue particularly well.

 

Current Pending Sector Count S.M.A.R.T. parameter is a critical parameter and indicates the current count of unstable sectors (waiting for remapping). The raw value of this attribute indicates the total number of sectors waiting for remapping. Later, when some of these sectors are read successfully, the value is decreased. If errors still occur when reading some sector, the hard drive will try to restore the data, transfer it to the reserved disk area (spare area) and mark this sector as remapped.

Please also consult your machine's or hard disk's documentation.
Recommendations

This is a critical parameter. Degradation of this parameter may indicate imminent drive failure. Urgent data backup and hardware replacement is recommended.

https://kb.acronis.com/content/9133
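The same attribute can be read directly on the NAS. A sketch, assuming smartctl (smartmontools) is available over SSH:

# All SMART attributes for the suspect disk
smartctl -A /dev/sda

# Just the pending / reallocated sector counters
smartctl -A /dev/sda | grep -Ei 'pending|realloc'

A short self-test (smartctl -t short /dev/sda, then smartctl -l selftest /dev/sda) is also an option, but keep extra load on a disk with pending sectors to a minimum until the data is backed up.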

 

It is quite likely that these bad sectors caused the NAS to kick the disk from one of the data RAIDs. The fact that those sectors appear stuck in "pending" is an indication that they will probably never recover. Without further examination of the logs, I'd say you need to replace that disk ASAP. Note: you must replace it with a disk of the same size or larger.
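After swapping in the replacement, the resync normally starts on its own; it can be monitored over SSH if you want to watch it. A sketch:

# Rebuild progress shows as a percentage and ETA per array
cat /proc/mdstat

# Refresh every few seconds until md126 shows [UUUUUU] and md127 shows [UUU]
watch -n 5 cat /proc/mdstat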

 

Your other disks appear to be healthy, which is good!


Cheers

 

 

Message 12 of 16
Sandshark
Sensei

Re: ReadyNAS Ultra-6 data degraded after firmware update

Since many users (myself included) rarely reboot their NAS except when updating the OS, that reboot is often what reveals a problem that has actually been there for a while. Folks tend to blame the update, but that's rarely the root cause; it's just the catalyst for the reboot. You are likely in this group, so it's not a "coincidence" at all; it's a convergence of circumstances.

 

11 pending sectors is potentially a problem, and drive replacement seems in order. Note that since the rebuild adds extra workload to the other old drives, making sure your backup is up to date before proceeding is good insurance.
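For topping up the backup from the NAS itself, rsync over SSH is one option. A hedged sketch; the destination host and paths are placeholders, and it assumes the data volume is mounted at /data (matching the volume name shown in the web UI):

# Incremental copy of the shares to another machine; re-running only sends changes
rsync -avh --progress /data/ backup-host:/mnt/backup/readynas/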

Message 13 of 16
SLAM-ER
Aspirant

Re: ReadyNAS Ultra-6 data degraded after firmware update

Thanks for all the help, guys. It's a pity the web interface couldn't tell me this, since the NAS clearly has access to all the data it would need... 😕 Even the LCD panel was flashing all the drives as degraded; surely it should only flash the faulty drive (disk 1) so the user knows what to replace.

 

I have backed it up, and will replace the faulty drive when I can afford it.  😄

 

Appreciate the assistance.

 

Message 14 of 16
StephenB
Guru

Re: ReadyNAS Ultra-6 data degraded after firmware update

It's the entire volume that is degraded, not a disk.

 

But I agree that Netgear needs to do a better job of identifying which disk(s) have dropped out of the array in the web UI. Forcing users to delve into mdstat.log, etc. isn't great.

 

Also, they should implement user-defined thresholds for disk health (pending sectors, etc.). 11 pending sectors is well below their hard-wired threshold, so you just see a "green" healthy disk in the web UI. IMO the hard-wired thresholds are too high (and based on posts here, Netgear support agrees that is the case).
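Until something like that exists, a user comfortable with SSH could poll the raw attribute against their own threshold. A rough sketch only, assuming smartctl is available and some notification path (the mail command and address here are placeholders):

#!/bin/sh
# Warn if the pending-sector count on /dev/sda reaches a user-chosen threshold.
# Intended to run from cron (e.g. daily); adjust the device and threshold to taste.
THRESHOLD=1
PENDING=$(smartctl -A /dev/sda | awk '/Current_Pending_Sector/ {print $10}')
if [ "${PENDING:-0}" -ge "$THRESHOLD" ]; then
    echo "sda has $PENDING pending sectors" | mail -s "NAS disk warning" admin@example.com
fi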

 

 

Message 15 of 16
Hopchen
Prodigy

Re: ReadyNAS Ultra-6 data degraded after firmware update


@StephenB wrote:

It's the entire volume that is degraded, not a disk.

 

But I agree that Netgear needs to do a better job of identifying which disk(s) have dropped out of the array in the web UI. Forcing users to delve into mdstat.log, etc. isn't great.

 

Also, they should implement user-defined thresholds for disk health (pending sectors, etc.). 11 pending sectors is well below their hard-wired threshold, so you just see a "green" healthy disk in the web UI. IMO the hard-wired thresholds are too high (and based on posts here, Netgear support agrees that is the case).

 

 


I completely agree with this. It falls in the same category as the incredibly confusing message: "Remove Inactive Volumes". The challenge in this case was that sda dropped from md126 but it was still active in md127. Regardless, there should be an improved way of notifying the user other than the display blinking and showing: "Degraded". I think that user notification is an area where NETGEAR has a long way to go. It is fine if you know the system and can pull stats from logs, etc. However, for the average user, the front-end notifications are sub-par.

 

The disk error threshold is an interesting topic. NETGEAR has set a threshold for when to send email notifications to the user: https://kb.netgear.com/30046/ReadyNAS-OS-6-Disk-Failure-Alerting

IMO, you cannot generalise disk errors to some arbitrary count, as in "it is only critical if you hit X amount of errors". Disk errors can hit different areas of the disk (some more important than others), and ATA errors can affect different aspects of how the disk operates.

 

They do need to re-think how they do this.

 

 

Message 16 of 16