

Protect2207
Aspirant

RN516+ Remove inactive volumes to use the disk (1 to 6)

Hello, 

 

I purchased new 8TB drives to replace my 6TB drives.

 

I proceeded with removing the first 6TB drive and plugged in a new 8TB drive. My NAS started syncing this drive, which takes up to 10 hours.

 

The issue is that my little son came along and pulled out the second drive during the sync.

 

Now I see the volume is inactive/dead, and all my drives show as inactive/dead.

 

I tried reseating all the drives, including putting the first original hard drive back in.

 

But nothing; after several reboots I still get the same message: remove inactive volumes to use the disk (1 to 6).

 

What should I do?

 

Thank you

Model: RN51600|ReadyNAS 516 6-Bay Diskless
Message 1 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)


@Protect2207 wrote:

 

The issue is that my little son came along and pulled out the second drive during the sync.

 

But nothing; after several reboots I still get the same message: remove inactive volumes to use the disk (1 to 6).

 


Options are 

  1. Rebuild the NAS with the new drives all in place, reconfigure it, and restore data from backup.
  2. Use SSH and attempt to force-remount the volume.
  3. Contact NETGEAR paid support (my.netgear.com).  You might need data recovery. https://kb.netgear.com/69/ReadyNAS-Data-Recovery-Diagnostics-Scope-of-Service
  4. Connect all 6 disks to a Windows PC (this likely requires a USB enclosure) and purchase ReclaiMe data recovery software.

Of course (1) assumes you have a backup.  (2) assumes working knowledge of Linux commands and the btrfs file system.  You shouldn't attempt it if you don't already have that knowledge.  (3) and (4) are both potentially expensive - my guess is that (3) would be cheaper.  A rough sketch of what (2) typically involves is below.
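
For context, (2) usually boils down to something like the following sketch. Treat it as illustrative only, not a procedure: the device names, the md device number, and the /data mount point are assumptions about a typical ReadyNAS OS6 layout, and forcing an assembly can itself cause data loss.

# examine the data partitions to see what mdadm thinks of them
mdadm --examine /dev/sd[a-f]3
# try to force-assemble the inactive data array
mdadm --assemble --scan --force
# check whether the array actually started
cat /proc/mdstat
# if it did, mount the btrfs data volume (md device and mount point are assumptions)
mount -t btrfs /dev/md127 /data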

Message 2 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Hi Stephen, 

 

Thank you for the quick reply.

 

So I have no backup of the complete volume, just the most important data.

 

I bought the device in March 2016 and registered it. It came with a 5-year warranty, but when I log in to the NETGEAR portal it says the warranty expired about 2 months after purchase..?

 

Basically I should still have 2 years of support left.. what do I need to do to get this support?

 

I could connect the drives to my computer and install ext drivers, but it would be a complete hassle.

 

I guess with the support I should still have, I would be able to restore some access..?

 

Thank you,

Message 3 of 21
Sandshark
Sensei

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Warranty support does not include data recovery.  You do not have a hardware failure, so I think you'll find little to no warranty support for your problem, even if it doesn't require recovery.

 

If the slide lock on the drive eject mechanism (which many do not use) is not sufficient to keep your son from ejecting drives from the NAS, you do need to find another way.  Even if he had removed two drives when no re-sync was running, you'd be in the same place.

 

In the future, step one in drive replacement is to update your backup.  Going through that many re-syncs is hard on the drives, and one that is nearing failure can be pushed over the edge by the process.

Message 4 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)


@Protect2207 wrote:

 

I bought the device in March 2016 and registered it. It came with a 5-year warranty, but when I log in to the NETGEAR portal it says the warranty expired about 2 months after purchase..?

 

Basically I should still have 2 years of support left.. what do I need to do to get this support?

The system came with a 5 year warranty - but as @Sandshark says, this is not a warranty problem - nothing failed, the root cause of the issue amounts to user error.  Support is not the same as warranty.

 

The RN516 also came with a couple of months of phone support.  Systems purchased between 1 June 2014 and 31 May 2016 also should have lifetime chat support.  You could try that, but I don't think that will be enough to get your volume mounted.  If you can't activate the chat support, try sending a private message (PM) to @JohnCM_S or @Marc_V and see if they can help with that.  Send a PM by clicking on the envelope icon on the upper right of the forum page.

Message 5 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Hi StephenB,

 

Thank you for the reply. I finally found some time to see if I can rebuild the volume.

 

So after I was able to SSH in, I do see my 6 drives (5 x 6TB and 1 x 8TB spare).

 

I see the 6 drives as a RAID5 array containing my data when I execute this command:

 

sudo mdadm --examine /dev/sd[abcdef]3

Result:

     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
     Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
  Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=112 sectors
          State : clean
    Device UUID : 54320d15:ef5d8c95:aa7e9848:119283f6

    Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 8364a218 - correct
         Events : 4581

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102
           Name : 7c6e3768:data-0  (local to host 7c6e3768)
  Creation Time : Thu Feb  2 23:24:38 2017
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
     Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
  Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=112 sectors
          State : clean
    Device UUID : ca39501b:4951c2f3:412fa3e9:dffed882

    Update Time : Mon Oct 28 11:42:54 2019
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 153dba4f - correct
         Events : 4581

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)
root@PrOtEcT-NaS:/# ls /dev/md* 
/dev/md0  /dev/md1

/dev/md:
0  1
root@PrOtEcT-NaS:/# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid10 sdb2[0] sda2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
      
md0 : active raid1 sda1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[7]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
      
unused devices: <none>


But when I launch these commands:

 

ls /dev/md* 

Result:
/dev/md0  /dev/md1

/dev/md:
0  1


mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 4 drives and 1 spare - not enough to start the array.
mdadm: No arrays found in config file or automatically

I only see /dev/md0 and md1, but I do not see /dev/md/data-0, which is my 27TB RAID5 volume.

 

md0 and md1 are RAID1 and RAID10 respectively, with only a few GB in them.

 

What should I do to see /dev/md/data-0 so that I can force-assemble it?

 

For the chat support, I still need to look into it.

 

Thank you,

 

Message 6 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Hi Stephen,

 

Thanks for the reply. I was able to dig deeper into the issue through the CLI.

 

I found out that I do not see the volume I want to restore with this command:

 

mdadm --assemble --scan (--force) 

 

The result is that it claims to see only 4 drives and 1 spare instead of 5 and 1 spare. It is that one missing drive in the array that messed up the data-0 volume.

 

root@NaS:/etc# mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 4 drives and 1 spare - not enough to start the array.
mdadm: No arrays found in config file or automatically

 

I only see the volumes /dev/md0 and md1 (RAID1 and RAID10) but not /dev/md/data-0 (RAID5).

 

But if I launch the following command, I see all 6 drives: 5 x 6TB active and 1 x 8TB, the new spare drive.

root@NaS:~# mdadm --examine /dev/sd[abcedef]3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x8
Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102
Name : 7c6e3768:data-0 (local to host 7c6e3768)
Creation Time : Thu Feb 2 23:24:38 2017
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=261864 sectors, after=112 sectors
State : clean
Device UUID : 274d2df3:12053c67:febc9b8f:3a6dccb8
Update Time : Mon Oct 28 11:37:01 2019
Bad Block Log : 512 entries available at offset 264 sectors - bad blocks present.
Checksum : 3c807cf5 - correct
Events : 4580
Layout : left-symmetric
Chunk Size : 64K
Device Role : spare
Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102
Name : 7c6e3768:data-0 (local to host 7c6e3768)
Creation Time : Thu Feb 2 23:24:38 2017
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=112 sectors
State : clean
Device UUID : 8f98f453:9227495e:ac314744:8d459beb
Update Time : Mon Oct 28 11:35:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 7bd126e8 - correct
Events : 511
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 1
Array State : AAAAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102
Name : 7c6e3768:data-0 (local to host 7c6e3768)
Creation Time : Thu Feb 2 23:24:38 2017
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=112 sectors
State : clean
Device UUID : b8421fce:19562dd9:770e73d8:6a6d6127
Update Time : Mon Oct 28 11:42:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 40d015cc - correct
Events : 4581
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102
Name : 7c6e3768:data-0 (local to host 7c6e3768)
Creation Time : Thu Feb 2 23:24:38 2017
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=112 sectors
State : clean
Device UUID : 71db0a51:af1298e1:f8ca6428:0debef05
Update Time : Mon Oct 28 11:42:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : faa6a53d - correct
Events : 4581
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 3
Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102
Name : 7c6e3768:data-0 (local to host 7c6e3768)
Creation Time : Thu Feb 2 23:24:38 2017
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=112 sectors
State : clean
Device UUID : 54320d15:ef5d8c95:aa7e9848:119283f6
Update Time : Mon Oct 28 11:42:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 8364a218 - correct
Events : 4581
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 4
Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 6585d56a:fa3e36a5:c08acda1:9ce37102
Name : 7c6e3768:data-0 (local to host 7c6e3768)
Creation Time : Thu Feb 2 23:24:38 2017
Raid Level : raid5
Raid Devices : 6
Avail Dev Size : 11711341680 (5584.40 GiB 5996.21 GB)
Array Size : 29278353920 (27922.01 GiB 29981.03 GB)
Used Dev Size : 11711341568 (5584.40 GiB 5996.21 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=112 sectors
State : clean
Device UUID : ca39501b:4951c2f3:412fa3e9:dffed882
Update Time : Mon Oct 28 11:42:54 2019
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 153dba4f - correct
Events : 4581
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 5
Array State : ..AAAA ('A' == active, '.' == missing, 'R' == replacing)

So what do I have to do so that when I launch this I see all 3 volumes and can restore data-0?

 

root@NaS:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid10 sdb2[0] sda2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
      
md0 : active raid1 sda1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[7]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
      
unused devices: <none>

Thank you

Message 7 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

This is the 4th time I've posted a reply, but it gets deleted each time! Why?!!!
Message 8 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)


@Protect2207 wrote:
This is the 4th time I've posted a reply, but it gets deleted each time! Why?!!!

Something is triggering the spam filter - not sure what.  I've released the two most recent posts.

Message 9 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Hi Stephen, 

 

Thank you for releasing the posts. I will consolidate both here, because they are similar, but I have added some extra data.

 

Meanwhile I have been thinking.

 

So I have the volume /dev/md/data-0 that I can't see.

 

It is normally composed of the following drives:

 

/dev/sda3 SPARE
/dev/sdb3 accidentally removed, missing from the array
/dev/sdc3 active in array
/dev/sdd3 active in array
/dev/sde3 active in array
/dev/sdf3 active in array

 

As you can see, the data-0 volume does not show up when I run 'cat /proc/mdstat'.

 

This array should have 5 active drives and 1 spare, but with "mdadm --assemble --scan" it sees only 4 drives and 1 spare, which is not enough to start it.

 

The missing drive in the array is /dev/sdb3.

 

How can I add it to a volume/array that I can't see?

 

I tried the following to add it, but it says it does not know the volume /dev/md/data-0:

 

root@NaS:~# mdadm --assemble --scan
mdadm: /dev/md/data-0 assembled from 4 drives and 1 spare - not enough to start the array.
mdadm: No arrays found in config file or automatically
root@NaS:~# mdadm --manage /dev/md/data-0  --add /dev/sdb3
mdadm: error opening /dev/md/data-0: No such file or directory

How can I handle this?

 

Thank you

Message 10 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)


@Protect2207 wrote:

How can i handle this?

 


The command you are trying is for adding a new disk to a mounted array - so it won't work in this situation.  The normal way is to add --force /dev/sdb3 to your --assemble command.

 

There's an event counter on each drive that is used to determine if the array is in sync.  If the counter is off (doesn't match the other disks), then it won't automatically assemble.  That's likely the situation here.  That also means there will be some data corruption - hopefully slight.
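
For example, something along these lines (illustrative only; it uses the partition names from your --examine output, and even a forced assemble can leave corruption behind):

# force-assemble the data array from its member partitions, then check the result
mdadm --assemble --force /dev/md/data-0 /dev/sd[a-f]3
cat /proc/mdstat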

Message 11 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Hi StephenB,

 

I added that to the command line, but it claims the device is not identified in the config file.

 

Where is this config file located, so that I can add to it?

 

root@NaS:~# mdadm --assemble --scan --force /dev/sdb3
mdadm: /dev/sdb3 not identified in config file.
 
Message 12 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

You're moving into territory I haven't explored much. 

 

There is a config file in /etc/mdadm/mdadm.conf but it doesn't have what you need in it (at least in my system).

Message 13 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Yep indeed it doesn't have much...

 

root@NaS:/etc/mdadm# cat mdadm.conf 
CREATE owner=root group=disk mode=0660 auto=yes

Online (Google..) I see that others have much more... 

 

2. Compare that UUID with the one inside /etc/mdadm.conf:

# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=6 metadata=0.90 spares=1 UUID=73560e25:92fb30cb:1c74ff07:ca1df0f7

Both UUIDs don't actually match.

3. There is possibility to manually mount mdraid by giving each device as a part of md0 raid:

# mdadm --assemble /dev/md0 /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
mdadm: /dev/md0 has been started with 6 drives.

Should I follow this solution? Can I do anything wrong with the above command?

 

source link:

https://www.thegeekdiary.com/mdadm-no-arrays-found-in-config-file-error-on-running-mdadm-assemble-sc...

 

Message 14 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

So I was finally able to see that my NAS still recognizes the array /dev/md/data-0 when using the following command; it also has a UUID.

 

root@PrOtEcT-NaS:/# mdadm --examine --scan
ARRAY /dev/md/0  metadata=1.2 UUID=c0633867:59ab0956:4351d87a:296a6dbf name=7c6e3768:0
ARRAY /dev/md/1  metadata=1.2 UUID=f17272ea:0d5eec53:2954b173:5dfdc72c name=7c6e3768:1
ARRAY /dev/md/data-0  metadata=1.2 UUID=6585d56a:fa3e36a5:c08acda1:9ce37102 name=7c6e3768:data-0
   spares=1

 

Should I add this to the mdadm.conf file, so that when I try to execute the following I don't get the error that it can't find the volume/array?

 

mdadm --assemble --scan --force /dev/sdb3
mdadm: /dev/sdb3 not identified in config file.

 

Also, when executing the following command, I see that the NAS recognizes the volume with the 6 drives in it (5 drives + 1 spare), but at the end it says 4 + 1 spare.

 

I am sure it is just a question of a few command lines to get it working again! But which ones..

 

root@NaS:/# mdadm --assemble --scan --verbose
mdadm: looking for devices for further assembly
mdadm: /dev/sdf3 is identified as a member of /dev/md/data-0, slot 5.
mdadm: /dev/sde3 is identified as a member of /dev/md/data-0, slot 4.
mdadm: /dev/sdd3 is identified as a member of /dev/md/data-0, slot 3.
mdadm: /dev/sdc3 is identified as a member of /dev/md/data-0, slot 2.
mdadm: /dev/sdb3 is identified as a member of /dev/md/data-0, slot 1.
mdadm: /dev/sda3 is identified as a member of /dev/md/data-0, slot -1.
mdadm: no uptodate device for slot 0 of /dev/md/data-0
mdadm: added /dev/sdb3 to /dev/md/data-0 as 1 (possibly out of date)
mdadm: added /dev/sdd3 to /dev/md/data-0 as 3
mdadm: added /dev/sde3 to /dev/md/data-0 as 4
mdadm: added /dev/sdf3 to /dev/md/data-0 as 5
mdadm: added /dev/sda3 to /dev/md/data-0 as -1
mdadm: added /dev/sdc3 to /dev/md/data-0 as 2
mdadm: /dev/md/data-0 assembled from 4 drives and 1 spare - not enough to start the array.
Message 15 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

You have to get past the "possibly out of date" bit.

 

Maybe try --force -v on the assemble command w/o specifying a device?
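
In other words, something like this (the same scan you ran before, just with force and verbose added):

mdadm --assemble --scan --force -v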

Message 16 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

This is the reply: the sdb3 event count is different, and it suggests using the dangerous --really-force.

 

root@NaS:/# mdadm --assemble --scan --force -v
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/1
mdadm: no recogniseable superblock on /dev/md/0
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdf1 is busy - skipping
mdadm: No super block found on /dev/sdf (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdf
mdadm: /dev/sde2 is busy - skipping
mdadm: /dev/sde1 is busy - skipping
mdadm: No super block found on /dev/sde (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: No super block found on /dev/sdd (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sdf3 is identified as a member of /dev/md/data-0, slot 5.
mdadm: /dev/sde3 is identified as a member of /dev/md/data-0, slot 4.
mdadm: /dev/sdd3 is identified as a member of /dev/md/data-0, slot 3.
mdadm: /dev/sdc3 is identified as a member of /dev/md/data-0, slot 2.
mdadm: /dev/sdb3 is identified as a member of /dev/md/data-0, slot 1.
mdadm: /dev/sda3 is identified as a member of /dev/md/data-0, slot -1.
mdadm: NOT forcing event count in /dev/sdb3(1) from 511 up to 4581
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: no uptodate device for slot 0 of /dev/md/data-0
mdadm: added /dev/sdb3 to /dev/md/data-0 as 1 (possibly out of date)
mdadm: added /dev/sdd3 to /dev/md/data-0 as 3
mdadm: added /dev/sde3 to /dev/md/data-0 as 4
mdadm: added /dev/sdf3 to /dev/md/data-0 as 5
mdadm: added /dev/sda3 to /dev/md/data-0 as -1
mdadm: added /dev/sdc3 to /dev/md/data-0 as 2
mdadm: /dev/md/data-0 assembled from 4 drives and 1 spare - not enough to start the array.
mdadm: looking for devices for further assembly
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdf1 is busy - skipping
mdadm: /dev/sde2 is busy - skipping
mdadm: /dev/sde1 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: No arrays found in config file or automatically

If I use this --really-force, will it make the array /dev/md/data-0 see 5 drives + 1 spare again and mount it?

 

Thank you, 

Message 17 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)


@Protect2207 wrote:

This is the reply: the sdb3 event count is different, and it suggests using the dangerous --really-force.

If I use this --really-force, will it make the array /dev/md/data-0 see 5 drives + 1 spare again and mount it?

 


Well, it is supposed to do that.  As I said above, the event count mismatch means that some data was written to the array after the drive was removed.  

 

mdadm: NOT forcing event count in /dev/sdb3(1) from 511 up to 4581

You are missing ~4000 write events, which is a really high number.  So there will likely be some file system corruption (e.g., data loss) - there's no easy way to predict how much.
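
If the volume does come up, it is worth checking the file system afterwards. A btrfs scrub will at least flag checksum errors in the data (a sketch only; the /data mount point is an assumption about the standard OS6 layout):

# start a scrub of the mounted data volume and check its progress later
btrfs scrub start /data
btrfs scrub status /data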

 

 

Message 18 of 21
Protect2207
Aspirant

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

So I ran the command with "--really-force" and now the array sees 5 drives and 1 spare, but it still claims it is not able to start the array.

 

 

root@NaS:/# mdadm --assemble --scan --really-force -v
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/1
mdadm: no recogniseable superblock on /dev/md/0
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdf1 is busy - skipping
mdadm: No super block found on /dev/sdf (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdf
mdadm: /dev/sde2 is busy - skipping
mdadm: /dev/sde1 is busy - skipping
mdadm: No super block found on /dev/sde (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: No super block found on /dev/sdd (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: No super block found on /dev/sda (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sdf3 is identified as a member of /dev/md/data-0, slot 5.
mdadm: /dev/sde3 is identified as a member of /dev/md/data-0, slot 4.
mdadm: /dev/sdd3 is identified as a member of /dev/md/data-0, slot 3.
mdadm: /dev/sdc3 is identified as a member of /dev/md/data-0, slot 2.
mdadm: /dev/sdb3 is identified as a member of /dev/md/data-0, slot 1.
mdadm: /dev/sda3 is identified as a member of /dev/md/data-0, slot -1.
mdadm: forcing event count in /dev/sdb3(1) from 511 upto 4581
mdadm: clearing FAULTY flag for device 5 in /dev/md/data-0 for /dev/sda3
mdadm: Marking array /dev/md/data-0 as 'clean'
mdadm: no uptodate device for slot 0 of /dev/md/data-0
mdadm: added /dev/sdc3 to /dev/md/data-0 as 2
mdadm: added /dev/sdd3 to /dev/md/data-0 as 3
mdadm: added /dev/sde3 to /dev/md/data-0 as 4
mdadm: added /dev/sdf3 to /dev/md/data-0 as 5
mdadm: added /dev/sda3 to /dev/md/data-0 as -1
mdadm: added /dev/sdb3 to /dev/md/data-0 as 1
mdadm: /dev/md/data-0 assembled from 5 drives and 1 spare - not enough to start the array.
mdadm: looking for devices for further assembly
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdf1 is busy - skipping
mdadm: /dev/sde2 is busy - skipping
mdadm: /dev/sde1 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
mdadm: /dev/sdd1 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping
mdadm: /dev/sdc1 is busy - skipping
mdadm: /dev/sdb2 is busy - skipping
mdadm: /dev/sdb1 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: No arrays found in config file or automatically

 

After doing another assemble scan, it was able to start the volume, making /dev/md127 available and with it all my data 🙂

 

root@PNaS:/# mdadm --assemble --scan
mdadm: /dev/md/data-0 has been started with 5 drives (out of 6) and 1 spare.
root@NaS:/# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md127 : active raid5 sdb3[1] sda3[6] sdf3[5] sde3[4] sdd3[3] sdc3[2]
      29278353920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
      [>....................]  recovery =  0.0% (361472/5855670784) finish=3239.5min speed=30122K/sec

md1 : active raid10 sdb2[0] sda2[5] sdf2[4] sde2[3] sdd2[2] sdc2[1]
      1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]

md0 : active raid1 sda1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[7]
      4190208 blocks super 1.2 [6/6] [UUUUUU]

 

After logging in to the GUI I still saw all disks in RED as inactive; after a while they turned blue "active", but on the left I see a resyncing task and the volume shown as degraded.

 

It has 8 hours to run, so in the afternoon I will be able to see whether I suffered any data loss... hopefully none, or minimal.

 

Lessons learned: keep it locked away, and BACKED UP.

 

I have an expansion box for my RN516, the EDA500. Can I run a second volume on it and make a full backup of my NAS data? Or should I just invest in a secondary NAS, put it on a second site, and do some rsync between them?

 

Thank you 🙂 🙂

 

 

 

Message 19 of 21
StephenB
Guru

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)


@Protect2207 wrote:

So I ran the command with "--really-force" and now the array sees 5 drives and 1 spare, but it still claims it is not able to start the array.

 

After doing another assemble scan, it was able to start the volume, making /dev/md127 available and with it all my data 🙂

 

After logging in to the GUI I still saw all disks in RED as inactive; after a while they turned blue "active", but on the left I see a resyncing task and the volume shown as degraded.

 

It has 8 hours to run, so in the afternoon I will be able to see whether I suffered any data loss... hopefully none, or minimal.

 


Great news!  I'm glad I was able to help (and also am hoping there is no significant loss).

 


@Protect2207 wrote:

 

I have an expansion box for my RN516, the EDA500. Can I run a second volume on it and make a full backup of my NAS data? Or should I just invest in a secondary NAS, put it on a second site, and do some rsync between them?

 


I prefer having backups on a completely different device if possible. NAS failures/issues can affect all volumes. That said, there is some value in keeping a backup on the EDA500 - that would have made recovery in this situation easier. You could use JBOD (one volume per disk) on the EDA500 volume(s) - giving up RAID redundancy in the backup, but simplifying recovery.  The EDA500 is very slow at resyncing, and using jbod works around that issue.  Using USB drives for backup is another local option.

 

Putting a NAS on a second site also gives you disaster recovery (theft, fire, lightning strike, etc).  The downside is that the backups will usually take longer (depending on internet service speeds), and it's harder to administer the remote NAS.  If you go that route, you should either use rsync over ssh (encrypted) or deploy some form of VPN to reach the remote site.  Also, don't forward HTTP (port 80) in the remote NAS, and use a strong admin password if you forward HTTPS (443) for remote access.
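
As a rough illustration of the rsync-over-ssh approach (the share name and remote host below are hypothetical, and in practice you would normally set this up as a scheduled backup job rather than run it by hand):

# push one share to a remote NAS over an encrypted ssh transport
rsync -av -e ssh /data/myshare/ admin@remote-nas.example.com:/data/backup/myshare/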

 

You could of course do both.

 

 

Message 20 of 21
Sandshark
Sensei

Re: RN516+ Remove inactive volumes to use the disk (1 to 6)

Using an EDA500 as a backup has one quirk:  if the main volume is destroyed and you have to re-create it, and the EDA volume was not Exported first, the EDA500 will not mount properly on the newly created OS.

 

There are two work-arounds:

 

Put the drives from the EDA500 into the main chassis and boot.  It will come up not knowing where the original main volume is, but it will normally work (the OS volume RAID normally extends to the EDA500 drives).  Once it has done so, you can Export the EDA volume and it will be ready to import into the new system.  Import is automatic at power-on.

 

Alternatively, you can manually mount the EDA volume via SSH.  It will never show up in the GUI (at least, I've not figured out how), but you can copy files via SSH.
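
A rough sketch of that manual mount, assuming the EDA500 data array assembles as its own md device and is btrfs (the md device number and mount point below are assumptions; check /proc/mdstat for the real name):

mdadm --assemble --scan              # assemble any arrays found on the EDA500 disks
cat /proc/mdstat                     # note which /dev/mdXXX the EDA volume came up as
mkdir -p /mnt/eda
mount -t btrfs /dev/md126 /mnt/eda   # replace md126 with the device shown in mdstat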

Message 21 of 21