Forum Discussion
Protect2207
Oct 28, 2019 Aspirant
RN516+ Remove inactive volumes to use the disk (1 to 6)
Hello, I purchased new 8TB drives to replace my 6TB drives. I proceeded with removing the first 6TB drive and plugged in a new 8TB drive. My NAS started syncing this drive, taking up to 10h....
StephenB
Oct 28, 2019 Guru - Experienced User
Protect2207 wrote:
Issue is now, my little son came and pulled out the second drive during the sync.
But nothing; after several reboots, I still always get the same message: remove inactive... 1 to 6.
Options are:
1. Rebuild the NAS with the new drives all in place, reconfigure it, and restore data from backup.
2. Use ssh and attempt to force remounting of the volume.
3. Contact Netgear paid support (my.netgear.com). You might need data recovery. https://kb.netgear.com/69/ReadyNAS-Data-Recovery-Diagnostics-Scope-of-Service
4. Connect all 6 disks to a Windows PC (likely requires a USB enclosure) and purchase ReclaiMe data recovery software.
Of course (1) assumes you have a backup. (2) assumes working knowledge of Linux commands and the btrfs file system. You shouldn't attempt it if you don't already have that knowledge. (3) and (4) are both potentially expensive - my guess is that (3) would be cheaper.
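Just to give you an idea of what (2) involves, it usually comes down to something along these lines (only a sketch, not a recipe - the md name and device letters below are typical for OS6, but they need to be checked against your own system, and I'd mount read-only before touching anything):
# look at the RAID metadata on the data partitions
mdadm --examine /dev/sd[abcdef]3
# try to assemble the data volume, forcing in members with stale event counts if necessary
mdadm --assemble --scan --force
# if the array comes up, mount the btrfs data volume read-only and verify your files
mount -o ro /dev/md/data-0 /data
If any of that looks unfamiliar, that's a good sign to stop here and go with (1), (3) or (4) instead.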
Protect2207
Oct 28, 2019 Aspirant
Hi Stephen,
Thank you for the quick reply.
So there is no backup of the complete volume, just the most important data.
I bought the device in March 2016 and registered it. I have a 5 year warranty with it, but when I connect to the Netgear portal it says it expired about 2 months after purchase..?
Basically I should still have 2 years of support.. what do I need to do to get this support?
I could connect the drives to my computer and install ext drivers, but that would be a complete hassle.
I guess with the support that I should still have, I would be able to restore some access..?
Thank you,
- Sandshark Oct 28, 2019 Sensei - Experienced User
Warranty support does not include data recovery. You do not have a hardware failure, so I think you'll find little to no warranty support for your problem, even if it doesn't require recovery.
If the slide lock on the drive eject mechanism (which many do not use) is not sufficient to keep your son from ejecting drives from the NAS, you do need to find another way. Even if he had removed two drives while no re-sync was running, you'd be in the same place.
In the future, step one in drive replacement is to update your backup. Going through that many re-syncs is hard on the drives, and one that is nearing failure can be pushed over the edge by the process.
- StephenB Oct 28, 2019 Guru - Experienced User
Protect2207 wrote:
I bought the device in March 2016 and registered it. I have a 5 year warranty with it, but when I connect to the Netgear portal it says it expired about 2 months after purchase..?
Basically I should still have 2 years of support.. what do I need to do to get this support?
The system came with a 5 year warranty - but as Sandshark says, this is not a warranty problem - nothing failed, the root cause of the issue amounts to user error. Support is not the same as warranty.
The RN516 also came with a couple of months of phone support. Systems purchased between 1 June 2014 and 31 May 2016 also should have lifetime chat support. You could try that, but I don't think that will be enough to get your volume mounted. If you can't activate the chat support, try sending a private message (PM) to JohnCM_S or Marc_V and see if they can help with that. Send a PM by clicking on the envelope icon on the upper right of the forum page.
- Protect2207 Nov 04, 2019 Aspirant
Hi StephenB,
Thank you for the reply, I finally found some time to see whether I can rebuild the volume.
After ssh'ing in, I do see my 6 drives (5 x 6TB and 1 spare 8TB).
I see the 6 drives as RAID5 with my data when I execute this command:
sudo mdadm --examine /dev/sd[abcdef]3
Result (start of the output is cut off; this is the tail of one device's report, followed by /dev/sdf3):
  Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=262056 sectors, after=112 sectors
  State : clean   Device UUID : 54320d15:ef5d8c95:aa7e9848:119283f6   Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors   Checksum : 8364a218 - correct
  Events : 4581   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 4   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdf3:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0
  Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102   Name : 7c6e3768:data-0 (local to host 7c6e3768)
  Creation Time : Thu Feb 2 23:24:38 2017   Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=262056 sectors, after=112 sectors
  State : clean   Device UUID : ca39501b:4951c2f3:412fa3e9:dffed882   Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors   Checksum : 153dba4f - correct
  Events : 4581   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 5   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)

root@PrOtEcT-NaS:/# ls /dev/md*
/dev/md0  /dev/md1
/dev/md:
0  1

root@PrOtEcT-NaS:/# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdb2[0] sda2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md0 : active raid1 sda1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[7]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>
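(To compare the drives more easily, I guess I could also filter out just the key fields per drive with something like this - assuming grep -E works on the NAS:
mdadm --examine /dev/sd[abcdef]3 | grep -E '^/dev/sd|Events|Device Role|Array State'
which should list, for each drive, its event counter, its role and the array state it last recorded.)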
But when I launch these commands:
ls /dev/md*
Result:
/dev/md0  /dev/md1
/dev/md:
0  1
mdadm --assemble --scan
Result:
mdadm: /dev/md/data-0 assembled from 4 drives and 1 spare - not enough to start the array.
mdadm: No arrays found in config file or automatically
I only see /dev/md0 and /dev/md1, but I do not see /dev/md/data-0, which is my 27TB RAID5 volume.
md0 and md1 are the RAID1 and RAID10 system volumes with only a few GB in them.
What should I do to get /dev/md/data-0 to show up so that I can force-assemble it?
For the chat support, I need to look into it also.
Thank you,
- Protect2207 Nov 04, 2019 Aspirant
Hi Stephen,
Thanks for the reply, I was able to dig deeper into the issue through the CLI.
I found out that I do not see the volume I want to restore with the command
mdadm --assemble --scan (--force)
The result is below, where it claims to see only 4 drives and 1 spare instead of 5 drives and 1 spare. It is that 1 missing drive in the array that messed up the volume data-0.
root@NaS:/etc# mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 4 drives and 1 spare - not enough to start the array.
mdadm: No arrays found in config file or automatically
I only see the volumes /dev/md0 and /dev/md1 (RAID1 and RAID10), but not /dev/md/data-0 (RAID5).
But if I launch the following command, I see all 6 drives: 5 x 6TB active and the new 8TB spare drive.
root@NaS:~# mdadm --examine /dev/sd[abcedef]3

/dev/sda3:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x8
  Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102   Name : 7c6e3768:data-0 (local to host 7c6e3768)
  Creation Time : Thu Feb 2 23:24:38 2017   Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=261864 sectors, after=112 sectors
  State : clean   Device UUID : 274d2df3:12053c67:febc9b8f:3a6dccb8   Update Time : Mon Oct 28 11:37:01 2019
  Bad Block Log : 512 entries available at offset 264 sectors - bad blocks present.   Checksum : 3c807cf5 - correct
  Events : 4580   Layout : left-symmetric   Chunk Size : 64K
  Device Role : spare   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdb3:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0
  Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102   Name : 7c6e3768:data-0 (local to host 7c6e3768)
  Creation Time : Thu Feb 2 23:24:38 2017   Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=262056 sectors, after=112 sectors
  State : clean   Device UUID : 8f98f453:9227495e:ac314744:8d459beb   Update Time : Mon Oct 28 11:35:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors   Checksum : 7bd126e8 - correct
  Events : 511   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 1   Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdc3:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0
  Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102   Name : 7c6e3768:data-0 (local to host 7c6e3768)
  Creation Time : Thu Feb 2 23:24:38 2017   Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=262056 sectors, after=112 sectors
  State : clean   Device UUID : b8421fce:19562dd9:770e73d8:6a6d6127   Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors   Checksum : 40d015cc - correct
  Events : 4581   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 2   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdd3:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0
  Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102   Name : 7c6e3768:data-0 (local to host 7c6e3768)
  Creation Time : Thu Feb 2 23:24:38 2017   Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=262056 sectors, after=112 sectors
  State : clean   Device UUID : 71db0a51:af1298e1:f8ca6428:0debef05   Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors   Checksum : faa6a53d - correct
  Events : 4581   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 3   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sde3:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0
  Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102   Name : 7c6e3768:data-0 (local to host 7c6e3768)
  Creation Time : Thu Feb 2 23:24:38 2017   Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=262056 sectors, after=112 sectors
  State : clean   Device UUID : 54320d15:ef5d8c95:aa7e9848:119283f6   Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors   Checksum : 8364a218 - correct
  Events : 4581   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 4   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdf3:
  Magic : a92b4efc   Version : 1.2   Feature Map : 0x0
  Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102   Name : 7c6e3768:data-0 (local to host 7c6e3768)
  Creation Time : Thu Feb 2 23:24:38 2017   Raid Level : raid5   Raid Devices : 6
  Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)   Array Size : 29278353920 (27922.01 GiB 29981.03 GB)   Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
  Data Offset : 262144 sectors   Super Offset : 8 sectors   Unused Space : before=262056 sectors, after=112 sectors
  State : clean   Device UUID : ca39501b:4951c2f3:412fa3e9:dffed882   Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors   Checksum : 153dba4f - correct
  Events : 4581   Layout : left-symmetric   Chunk Size : 64K
  Device Role : Active device 5   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)
So what do I need to do so that when I run the command below I see all 3 volumes and can restore data-0?
root@NaS:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdb2[0] sda2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md0 : active raid1 sda1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[7]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>
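From what I have read about mdadm so far, I was thinking of trying something along these lines, but I do not want to make things worse without confirmation that it is the right direction (device letters taken from my output above: sda3 is the new 8TB spare, sdb3 is the member with the old event count):
# stop whatever partially assembled state is left over
mdadm --stop /dev/md/data-0
# force-assemble the data array from the five original 6TB members, leaving the new spare out for now
mdadm --assemble --force /dev/md/data-0 /dev/sd[bcdef]3
cat /proc/mdstat
Does that look like a reasonable approach, or is there a safer way?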
Thank you
- Protect2207 Nov 04, 2019 Aspirant
This is the 4th time I post a reply, but it gets deleted each time! Why?!!!
- StephenB Nov 04, 2019 Guru - Experienced User
Protect2207 wrote:
This is the 4th time I post a reply, but it gets deleted each time! Why?!!!
Something is triggering the spam filter - not sure what. I've released the two most recent posts.