lfrye
Jul 19, 2019 · Aspirant
ReadyNAS RN3220 Volume is Degraded but all green lights & no errors on disks
Hello, I'll start by stating I do not know what this "*Select Location" thing is while creating this discussion, so I apologize if I used the wrong "Location". Here is the issue: I have a Ne...
StephenB
Jul 23, 2019 · Guru - Experienced User
Replacing it is probably the best option.
JohnCM_S
Jul 23, 2019 · NETGEAR Employee Retired
- lfrye · Jul 23, 2019 · Aspirant
Thank you again John for all the support. I truly appreciate it.
Thank you,
Leif
- JohnCM_S · Jul 23, 2019 · NETGEAR Employee Retired
Hi lfrye,
You are welcome. I am glad we could help. :)
Regards,
- lfrye · Jul 30, 2019 · Aspirant
Hi John,
We replaced the drive but it is not rebuilding. How do I make it rebuild the disk? It currently says the state is healthy but has been stuck on "Balancing in progress 0% complete" since we replaced the drive 4 days ago.
Thank you,
Leif
- StephenB · Jul 30, 2019 · Guru - Experienced User
lfrye wrote:
We replaced the drive but it is not rebuilding. How do I make it rebuild the disk? It currently says the state is healthy but has been stuck on "Balancing in progress 0% complete" since we replaced the drive 4 days ago.
Odd. Did you start the balance before you replaced the drive?
Can you download the log zip file and post mdstat.log (copy/paste it into a reply).
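If ssh is enabled on the NAS, you can also check the RAID state directly. A minimal sketch (assuming root ssh access has been turned on; the md1 name below is taken from the logs in this thread):

# Live RAID status; each array line ends with [configured/working] [UU...]
# An underscore among the U markers means that member is missing.
cat /proc/mdstat

# Fuller detail for one array (md1 shown here)
mdadm --detail /dev/md1

In mdstat, [8/8] [UUUUUUUU] means all eight members are active, while something like [7/6] [U_UUUUU] means one of seven slots is empty.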
- lfrye · Jul 30, 2019 · Aspirant
Sure can, please see below from mdstat.log: (Edit: balance was done after the new drive was installed)
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid6 sdj3[9] sde3[0] sda3[8] sdb3[5] sdc3[6] sdd3[7] sdi3[3] sdh3[2]
17552488704 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
md1 : active raid6 sde2[0] sdc2[6] sdd2[5] sdb2[4] sdh2[3] sdi2[2]
2616320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [U_UUUUU]
md0 : active raid1 sdj1[9] sde1[0] sda1[8] sdb1[5] sdc1[6] sdd1[7] sdi1[3] sdh1[2]
4190208 blocks super 1.2 [8/8] [UUUUUUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Thu Jun 5 23:12:24 2014
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 8
Total Devices : 8
Persistence : Superblock is persistent
Update Time : Tue Jul 30 13:47:23 2019
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Name : 7c6ed9ac:0 (local to host 7c6ed9ac)
UUID : 361ae6a0:cb42991e:df5b94fb:3de50d5a
Events : 3021
Number Major Minor RaidDevice State
0 8 65 0 active sync /dev/sde1
9 8 145 1 active sync /dev/sdj1
2 8 113 2 active sync /dev/sdh1
3 8 129 3 active sync /dev/sdi1
7 8 49 4 active sync /dev/sdd1
6 8 33 5 active sync /dev/sdc1
5 8 17 6 active sync /dev/sdb1
8 8 1 7 active sync /dev/sda1
/dev/md/1:
Version : 1.2
Creation Time : Tue Mar 13 18:36:51 2018
Raid Level : raid6
Array Size : 2616320 (2.50 GiB 2.68 GB)
Used Dev Size : 523264 (511.00 MiB 535.82 MB)
Raid Devices : 7
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Jul 30 13:46:52 2019
State : clean, degraded
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : 7c6ed9ac:1 (local to host 7c6ed9ac)
UUID : 3ff1d12b:7b93ca9f:4d63b3c7:5ca60a6c
Events : 2261
Number Major Minor RaidDevice State
0 8 66 0 active sync /dev/sde2
- 0 0 1 removed
2 8 130 2 active sync /dev/sdi2
3 8 114 3 active sync /dev/sdh2
4 8 18 4 active sync /dev/sdb2
5 8 50 5 active sync /dev/sdd2
6 8 34 6 active sync /dev/sdc2
/dev/md/data-0:
Version : 1.2
Creation Time : Thu Jun 5 23:12:24 2014
Raid Level : raid6
Array Size : 17552488704 (16739.36 GiB 17973.75 GB)
Used Dev Size : 2925414784 (2789.89 GiB 2995.62 GB)
Raid Devices : 8
Total Devices : 8
Persistence : Superblock is persistent
Update Time : Sat Jul 27 17:52:18 2019
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 7c6ed9ac:data-0 (local to host 7c6ed9ac)
UUID : 0dddc1d7:3cce3178:52db493d:44f5e2ab
Events : 130974
Number Major Minor RaidDevice State
0 8 67 0 active sync /dev/sde3
9 8 147 1 active sync /dev/sdj3
2 8 115 2 active sync /dev/sdh3
3 8 131 3 active sync /dev/sdi3
7 8 51 4 active sync /dev/sdd3
6 8 35 5 active sync /dev/sdc3
5 8 19 6 active sync /dev/sdb3
8 8 3 7 active sync /dev/sda3
- lfrye · Jul 30, 2019 · Aspirant
Balance was done by my on-site tech after he installed the new drive.
The logs in mdstat.log are unchanged from my previous reply. (Edit: I do not see the drive sdf, which is the one that we replaced.)
- lfrye · Jul 30, 2019 · Aspirant
I was getting errors when trying to download the logs, so management told me to reboot the NAS. Just before it rebooted, I received an email notification saying the balancing had completed, and then the device rebooted. When it came back online 10 minutes later, all the drives were blue and health was normal.
I really should have just rebooted the NAS before coming here, but I was kinda worried I would lose the array.
Thank you StephenB for your help!
- StephenB · Jul 31, 2019 · Guru - Experienced User
lfrye wrote:
I was getting errors when trying to download the logs, so management told me to reboot the NAS. Just before it rebooted, I received an email notification saying the balancing had completed, and then the device rebooted. When it came back online 10 minutes later, all the drives were blue and health was normal.
So the volume is rebuilt now?
You can confirm that by looking at mdstat.log in the log zip file.
- StephenB · Aug 05, 2019 · Guru - Experienced User
This was caught in the spam filter for a while, so I just saw it.
md1 is the degraded array (sdj is missing):
lfrye wrote:
md1 : active raid6 sde2[0] sdc2[6] sdd2[5] sdb2[4] sdh2[3] sdi2[2]
2616320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [U_UUUUU]
md1 is the swap partition, so it isn't part of your data array. But it should still be fixed.
I suggest testing sdj in a Windows PC (using Data Lifeguard for Western Digital drives, SeaTools for Seagate). After the long read test, you can also run the destructive write-zeros test. That will pick up issues that are missed by the read test - and it will also unformat the drive (which is needed before you try adding it back to the array).
FWIW, it is possible to manually add sdj back to md1 via ssh (and paid support could do this for you). But you should test the disk anyway, since we don't know what failed the first time.
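For reference, a rough sketch of that manual re-add (the device and partition names are assumptions taken from the logs above - confirm them with cat /proc/mdstat and mdadm --detail first, and only do this after the disk has tested good):

# Clear any stale RAID metadata from the replacement disk's swap partition
# (assumes the missing md1 member is /dev/sdj2 - verify before running)
mdadm --zero-superblock /dev/sdj2

# Add the partition back into the degraded md1 array
mdadm /dev/md1 --add /dev/sdj2

# Then watch the recovery progress
cat /proc/mdstat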
- lfrye · Aug 05, 2019 · Aspirant
Hi StephenB,
There is no drive labeled sdj. I have drives labeled sda-sdi.
After rebooting the NAS, the RAID 6 rebuilt and appears good.
Thank you for your follow up,
Leif
- StephenB · Aug 05, 2019 · Guru - Experienced User
lfrye wrote:
There is no drive labeled sdj. I have drives labeled sda-sdi.
After rebooting the NAS, the RAID 6 rebuilt and appears good.
Likely the drive got relabeled when you rebooted.
Get a fresh set of logs, and look at mdstat again. Make sure that md1 ends with algorithm 2 [8/8] [UUUUUUUU] - with no _ in the [U...] part.
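If you have ssh access, a quick-and-dirty way to spot that marker (just a heuristic, not an official tool - the underscore only appears in the status brackets of a degraded array):

# Any output means a degraded array; no output means all members are present
grep '_' /proc/mdstat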
- lfrye · Aug 05, 2019 · Aspirant
No underscores in the U strings. See below:
md1 : active raid10 sda2[7] sdb2[6] sdi2[5] sdc2[4] sdd2[3] sdh2[2] sdf2[1] sde2[0]
2093056 blocks super 1.2 512K chunks 2 near-copies [8/8] [UUUUUUUU]
md127 : active raid6 sde3[0] sda3[8] sdb3[5] sdc3[6] sdd3[7] sdi3[3] sdh3[2] sdf3[9]
17552488704 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
md0 : active raid1 sde1[0] sda1[8] sdb1[5] sdc1[6] sdd1[7] sdi1[3] sdh1[2] sdf1[9]
4190208 blocks super 1.2 [8/8] [UUUUUUUU]
- StephenB · Aug 05, 2019 · Guru - Experienced User
Great - everything looks fine.
- lfrye · Aug 05, 2019 · Aspirant
Thank you StephenB for your follow-through. I truly appreciate it.
Thank you,
Leif