
Forum Discussion

Quinny1
Aspirant
Apr 09, 2015

Ultra 6 - Dead drive, Replaced but won't Rebuild #24999085

I have a ReadyNAS Ultra 6 in RAID 6 which was 98% full, running RAIDiator 4.2.25.

Wed Jan 7 12:56:48 NZDT 2015 Disk failure detected.
Wed Jan 7 12:57:01 NZDT 2015 ..... If this disk is a part of a RAID 6 volume, your volume is still protected if this is your first failure. A 2nd disk failure will make your volume unprotected. If this disk is a part of a RAID 10 volume,your volume is still protected if more than half of the disks alive. But another failure of disks been marked may render that volume dead. It is recommended that you replace the failed disk as soon as possible to maintain optimal protection of your volume.


Powered down the system and left it off, as I could not deal with it all due to unrelated PC issues at the same time :(

Finally bought a new drive. Being, umm, "blonde", I turned the unit off to replace the drive. Spent February turning everything on and off to see if it noticed the drive. Nada.

Sun Mar 8 19:51:59 NZDT 2015 Data volume will be rebuilt with disk 4.
Sun Mar 8 19:50:20 NZDT 2015 System is up.


Waited 2 weeks, nothing.
Tried again, waited again, nothing.
Researched and found out it should be done as a hot-swap.

Sat Apr 4 19:38:11 NZDT 2015 New disk detected. If multiple disks have been added, they will be processed one at a time. Please do not remove any added disk(s) during this time. [Disk 4]
Sat Apr 4 19:37:14 NZDT 2015 A disk was removed from the ReadyNAS. For full protection of your data volume, please add a replacement disk as soon as possible.
Sat Apr 4 19:37:14 NZDT 2015 Disk removal detected. [Disk 4]


Saw the new disk 4 was checked AND PASSED. Yay, this time it will work.
Nope, nada.
Read every forum post on the topic, read everything I could find on the web, tried options all over.
Found it "could be" that the volume was too full. Moved 1 TB off the machine.

Wed Apr 8 20:45:57 NZST 2015 Data volume will be rebuilt with disk 4.
Wed Apr 8 20:43:46 NZST 2015 New disk detected. If multiple disks have been added, they will be processed one at a time. Please do not remove any added disk(s) during this time. [Disk 4]
Wed Apr 8 20:43:07 NZST 2015 A disk was removed from the ReadyNAS. For full protection of your data volume, please add a replacement disk as soon as possible.
Wed Apr 8 20:43:07 NZST 2015 Disk removal detected. [Disk 4]


And still nothing. I checked in Volume > Volume, as people with this rebuild-not-happening issue suggest, and nope, nothing happens. 3 months now and I'm so over it. Anyone got any ideas of what I can try?

12 Replies

  • Email with details of what my friend cg did

    Bit of a spam below, but these are the steps I took to fix it. Might help someone on the forums.

    Parts with ### are my comments to explain what I did; the rest are console copy/pastes.


    Enjoy ;-)
    cg



    ### GPT Partition table missing for new disk /dev/sdd;
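    ### (The listing below is presumably from "sgdisk -p /dev/sdd" - note the
    ### empty partition list.)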


    Disk /dev/sdd: 4294967295 sectors, 2047G
    Logical sector size: 512
    Disk identifier (GUID): 470beb69-b974-4199-a9e0-0027bdd134a7
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134


    Number  Start (sector)   End (sector)  Size    Code  Name
    QuinnyNAS:~#


    ### Clone the partition table from another OK drive, i.e. /dev/sdf;
    ### sgdisk -R /dev/TARGET /dev/SOURCE;
    ### sgdisk -G /dev/sdd   to randomise its GUIDs so they don't clash with the source drive




    QuinnyNAS:~# sgdisk -R /dev/sdd /dev/sdf
    The operation has completed successfully.
    QuinnyNAS:~# sgdisk -p /dev/sdd
    Disk /dev/sdd: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 3CCDEB15-2940-4482-9BD1-B84603AA8910
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 64-sector boundaries
    Total free space is 4092 sectors (2.0 MiB)
    QuinnyNAS:~# sgdisk -G /dev/sdd
    The operation has completed successfully.

    QuinnyNAS:~# sgdisk -p /dev/sdd
    Disk /dev/sdd: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): E3936097-C73B-400B-A176-4C00C24D0927
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 64-sector boundaries
    Total free space is 4092 sectors (2.0 MiB)


    Number  Start (sector)   End (sector)  Size    Code  Name

       1              64         8388671   4.0 GiB     FD00
       2         8388672         9437247   512.0 MiB   FD00
       3         9437248      5860529072   2.7 TiB     FD00


    ### Confirm clone is matching other drive;


    QuinnyNAS:~# sgdisk -p /dev/sdf
    Disk /dev/sdf: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 3CCDEB15-2940-4482-9BD1-B84603AA8910
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 64-sector boundaries
    Total free space is 4092 sectors (2.0 MiB)


    Number  Start (sector)   End (sector)  Size    Code  Name
       1              64         8388671   4.0 GiB     FD00
       2         8388672         9437247   512.0 MiB   FD00
       3         9437248      5860529072   2.7 TiB     FD00


    ### Looks good - verify block devices are present now (sdd1,sdd2 previously missing);


    QuinnyNAS:~# ls -l /dev/sdd*
    brw-rw---- 1 root disk 8, 48 2015-04-17 20:21 /dev/sdd
    brw-rw---- 1 root disk 8, 49 2015-04-17 20:21 /dev/sdd1
    brw-rw---- 1 root disk 8, 58 2015-04-15 20:05 /dev/sdd10
    brw-rw---- 1 root disk 8, 59 2015-04-15 20:05 /dev/sdd11
    brw-rw---- 1 root disk 8, 60 2015-04-15 20:05 /dev/sdd12
    brw-rw---- 1 root disk 8, 61 2015-04-15 20:05 /dev/sdd13
    brw-rw---- 1 root disk 8, 62 2015-04-15 20:05 /dev/sdd14
    brw-rw---- 1 root disk 8, 63 2015-04-15 20:05 /dev/sdd15
    brw-rw---- 1 root disk 8, 50 2015-04-17 20:21 /dev/sdd2
    brw-rw---- 1 root disk 8, 51 2015-04-15 20:05 /dev/sdd3
    brw-rw---- 1 root disk 8, 52 2015-04-15 20:05 /dev/sdd4
    brw-rw---- 1 root disk 8, 53 2015-04-15 20:05 /dev/sdd5
    brw-rw---- 1 root disk 8, 54 2015-04-15 20:05 /dev/sdd6
    brw-rw---- 1 root disk 8, 55 2015-04-15 20:05 /dev/sdd7
    brw-rw---- 1 root disk 8, 56 2015-04-15 20:05 /dev/sdd8
    brw-rw---- 1 root disk 8, 57 2015-04-15 20:05 /dev/sdd9
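
    ### (Aside, not from cg's original log: if the sdd1/sdd2 device nodes
    ### still don't appear after cloning the table, asking the kernel to
    ### re-read the partition table usually creates them:)
    ###
    ###     blockdev --rereadpt /dev/sdd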


    ### Check mdadm metadevices - /dev/md1 and /dev/md2 were missing a drive;


    QuinnyNAS:~# mdadm -D /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Wed May 23 08:19:46 2012
         Raid Level : raid6
         Array Size : 2096896 (2048.09 MiB 2147.22 MB)
      Used Dev Size : 524224 (512.02 MiB 536.81 MB)
       Raid Devices : 6
      Total Devices : 5
        Persistence : Superblock is persistent


        Update Time : Fri Apr 17 20:06:54 2015
              State : clean, degraded
     Active Devices : 5
    Working Devices : 5
     Failed Devices : 0
      Spare Devices : 0


             Layout : left-symmetric
         Chunk Size : 64K


               Name : 001F33EAD668:1
               UUID : 9ead4bd0:3fb129ca:61b328c4:17239b42
             Events : 6607


        Number   Major   Minor   RaidDevice State
           0       8        2        0      active sync   /dev/sda2
           1       8       18        1      active sync   /dev/sdb2
           2       8       34        2      active sync   /dev/sdc2
           3       0        0        3      removed
           4       8       66        4      active sync   /dev/sde2
           5       8       82        5      active sync   /dev/sdf2


    ### Add in new /dev/sdd2 into /dev/md1;


    QuinnyNAS:~# mdadm --manage /dev/md1 --add /dev/sdd2
    mdadm: added /dev/sdd2


    ### Check resync process;


    QuinnyNAS:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid6 sda3[0] sdf3[5] sde3[4] sdc3[2] sdb3[1]
          11702178816 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/5] [UUU_UU]


    md1 : active raid6 sdd2[6] sda2[0] sdf2[5] sde2[4] sdc2[2] sdb2[1]
          2096896 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/5] [UUU_UU]
          [====>................]  recovery = 21.0% (110592/524224) finish=0.1min speed=36864K/sec


    md0 : active raid1 sda1[0] sdf1[5] sde1[4] sdc1[2] sdb1[1]
          4193268 blocks super 1.2 [5/5] [UUUUU]

    unused devices: <none>


    ### Once mdstat is 100% above, check sync and State = Clean;
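    ### (Aside, not from cg's original log: mdadm can also block until the
    ### resync completes, e.g. "mdadm --wait /dev/md1".)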


    QuinnyNAS:~# mdadm -D /dev/md1
    /dev/md1:
            Version : 1.2
      Creation Time : Wed May 23 08:19:46 2012
         Raid Level : raid6
         Array Size : 2096896 (2048.09 MiB 2147.22 MB)
      Used Dev Size : 524224 (512.02 MiB 536.81 MB)
       Raid Devices : 6
      Total Devices : 6
        Persistence : Superblock is persistent


        Update Time : Fri Apr 17 20:22:05 2015
              State : clean
     Active Devices : 6
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 0


             Layout : left-symmetric
         Chunk Size : 64K


               Name : 001F33EAD668:1
               UUID : 9ead4bd0:3fb129ca:61b328c4:17239b42
             Events : 6628


      Number  Major  Minor  RaidDevice State
        0    8     2     0    active sync  /dev/sda2
        1    8    18     1    active sync  /dev/sdb2
        2    8    34     2    active sync  /dev/sdc2
        6    8    50     3    active sync  /dev/sdd2
        4    8    66     4    active sync  /dev/sde2
        5    8    82     5    active sync  /dev/sdf2


    ### Check /dev/md2, missing drive too;


    QuinnyNAS:~# mdadm -D /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Wed May 23 08:19:46 2012
         Raid Level : raid6
         Array Size : 11702178816 (11160.07 GiB 11983.03 GB)
      Used Dev Size : 2925544704 (2790.02 GiB 2995.76 GB)
       Raid Devices : 6
      Total Devices : 5
        Persistence : Superblock is persistent


        Update Time : Fri Apr 17 20:22:53 2015
              State : clean, degraded
     Active Devices : 5
    Working Devices : 5
     Failed Devices : 0
      Spare Devices : 0


             Layout : left-symmetric
         Chunk Size : 64K


               Name : 001F33EAD668:2
               UUID : a51aa3c9:16e9bdd2:d928eae3:5d025c06
             Events : 317712


        Number   Major   Minor   RaidDevice State
           0       8        3        0      active sync   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3
           2       8       35        2      active sync   /dev/sdc3
           3       0        0        3      removed
           4       8       67        4      active sync   /dev/sde3
           5       8       83        5      active sync   /dev/sdf3


    ### Add /dev/sdd3 to /dev/md2;


    QuinnyNAS:~# mdadm --manage /dev/md2 --add /dev/sdd3
    mdadm: added /dev/sdd3


    ### Again, check mdstat;


    QuinnyNAS:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
    md2 : active raid6 sdd3[6] sda3[0] sdf3[5] sde3[4] sdc3[2] sdb3[1]
          11702178816 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/5] [UUU_UU]
          [>....................]  recovery =  0.0% (105728/2925544704) finish=8760.6min speed=5564K/sec
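
    ### (Aside, not from cg's original log: finish=8760.6min is roughly six
    ### days. If that estimate is too slow for comfort, the kernel's md resync
    ### speed floor can be raised; 50000 KB/s below is just an example value:)
    ###
    ###     echo 50000 > /proc/sys/dev/raid/speed_limit_min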


    ### Yay syncing!;


    QuinnyNAS:~# mdadm -D /dev/md2
    /dev/md2:
            Version : 1.2
      Creation Time : Wed May 23 08:19:46 2012
         Raid Level : raid6
         Array Size : 11702178816 (11160.07 GiB 11983.03 GB)
      Used Dev Size : 2925544704 (2790.02 GiB 2995.76 GB)
       Raid Devices : 6
      Total Devices : 6
        Persistence : Superblock is persistent


        Update Time : Fri Apr 17 20:31:24 2015
              State : clean, degraded, recovering
     Active Devices : 5
    Working Devices : 6
     Failed Devices : 0
      Spare Devices : 1


             Layout : left-symmetric
         Chunk Size : 64K


     Rebuild Status : 0% complete


               Name : 001F33EAD668:2
               UUID : a51aa3c9:16e9bdd2:d928eae3:5d025c06
             Events : 317800


      Number  Major  Minor  RaidDevice State
        0    8     3     0    active sync  /dev/sda3
        1    8    19     1    active sync  /dev/sdb3
        2    8    35     2    active sync  /dev/sdc3
        6    8    51     3    spare rebuilding  /dev/sdd3
        4    8    67     4    active sync  /dev/sde3
        5    8    83     5    active sync  /dev/sdf3


    ### Wait for the resync to finish. You CAN turn the NAS off/on, but ideally leave it running until it completes.
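
    ### (Aside, not from cg's original log: to keep an eye on progress, a
    ### plain POSIX loop works - no extra tools assumed on the NAS:)
    ###
    ###     while :; do cat /proc/mdstat; sleep 60; done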

  • Thanks Quinny, after browsing the forum for hours, I finally found this thread. Your friend's script has proven to be very useful for me, as I had exactly the same problem as you. Is there any way to thank you more?
