
Forum Discussion

JS78
Aspirant
Jan 02, 2017

ReadyNAS Pro 6 not rebuilding raid after entering new disk

Drive 6 failed on my ReadyNAS Pro 6 (running RAIDiator 4.2.30).

I inserted a new drive of the same make and model into the slot. FrontView and the log files now show conflicting information: the log states the system is rebuilding, while the status shows the volume is not redundant and drive 6 has failed.

 

I logged in over SSH and checked the syslog (which reports "NAS-HOME kernel: sdf: unknown partition table") and the expand_md.log file (a portion is included below).
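For anyone who wants to reproduce the check, commands along these lines over SSH will show the same information (dmesg is used here as a stand-in for reading the syslog file directly, since the exact syslog path can vary between RAIDiator builds; the expand_md.log path is the one that appears in the excerpt below):

# kernel messages for the new disk in bay 6, e.g. "sdf: unknown partition table"
dmesg | grep sdf
# the rebuild/expansion log quoted below
tail -n 100 /var/log/frontview/expand_md.log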

 

[2017/01/02 12:58:53  2162] Boot, handle mutiple expand_md
[2017/01/02 12:58:54  2163] STAGE_CHECK: saved my_pid 2163 in check mode.
[2017/01/02 12:58:54  2163] RAID MODE: 1, sn=
[2017/01/02 12:58:54  2163] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 12:58:54  2163] Current file system status: offline
[2017/01/02 12:58:54  2163] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2017/01/02 12:58:54  2163] Running disk SMART quick self-test on new disk 6 [/dev/sdf]...
[2017/01/02 13:00:54  2163] PASSED
[2017/01/02 13:00:54  2163] LINE 1144: exec command: killall -HUP monitor_enclosure
[2017/01/02 13:00:55  2163] X_level: 5
[2017/01/02 13:00:55  2163]  /usr/sbin/expand_md
[2017/01/02 13:00:55  2163] MD degraded 1, bg_job 0, boot 1
[2017/01/02 13:00:55  2163] Read 1957072 byte from configuration
[2017/01/02 13:00:55  2163] Disk configuration matching with online configuration.
[2017/01/02 13:00:55  2163] May need to fix array.
[2017/01/02 13:00:55  2163] new_disk_pt_count: 1
[2017/01/02 13:00:55  2163] * new disk: /dev/sdf
[2017/01/02 13:00:55  2163] ===== Partition Entry (MD not used) =====
[2017/01/02 13:00:55  2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 0 0 NO_MD 0
[2017/01/02 13:00:55  2163] partitions: 0 property: UNKNOWN
[2017/01/02 13:00:55  2163] ===== Partition Entry (Used by MD) =====
[2017/01/02 13:00:55  2163] 000 /dev/sda,WD-WMAY00561588,8, 0, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55  2163] partitions: 3 property: 4K
[2017/01/02 13:00:55  2163]     000 /dev/sda1, 8, 1, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55  2163]     001 /dev/sda2, 8, 2, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55  2163]     002 /dev/sda5, 8, 5, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55  2163] 001 /dev/sdc,WD-WMAY00519101,8, 32, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55  2163] partitions: 3 property: 4K
[2017/01/02 13:00:55  2163]     000 /dev/sdc1, 8, 33, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55  2163]     001 /dev/sdc2, 8, 34, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55  2163]     002 /dev/sdc5, 8, 37, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55  2163] 002 /dev/sde,WD-WMAY00446595,8, 64, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55  2163] partitions: 3 property: 1SECTOR
[2017/01/02 13:00:55  2163]     000 /dev/sde1, 8, 65, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55  2163]     001 /dev/sde2, 8, 66, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55  2163]     002 /dev/sde5, 8, 69, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55  2163] 003 /dev/sdb,WD-WCC4E0420327,8, 16, 3907018584 3907016481 3907016481 MD_FULL 0
[2017/01/02 13:00:55  2163] partitions: 4 property: 4K
[2017/01/02 13:00:55  2163]     000 /dev/sdb1, 8, 17, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55  2163]     001 /dev/sdb2, 8, 18, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55  2163]     002 /dev/sdb5, 8, 21, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55  2163]     003 /dev/sdb6, 8, 22, 1953504451 MD_FULL md=/dev/md3
[2017/01/02 13:00:55  2163] 004 /dev/sdd,WD-WCC4E0426359,8, 48, 3907018584 3907016481 3907016481 MD_FULL 0
[2017/01/02 13:00:55  2163] partitions: 4 property: 4K
[2017/01/02 13:00:55  2163]     000 /dev/sdd1, 8, 49, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55  2163]     001 /dev/sdd2, 8, 50, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55  2163]     002 /dev/sdd5, 8, 53, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55  2163]     003 /dev/sdd6, 8, 54, 1953504451 MD_FULL md=/dev/md3
[2017/01/02 13:00:55  2163] in find_drive_... looking at /dev/md3
[2017/01/02 13:00:55  2163] in find_drive_... looking at /dev/md2
[2017/01/02 13:00:55  2163] found drive has the needed size for /dev/md2: /dev/sdd, 3907018584, 3907016481
[2017/01/02 13:00:55  2163] Pre-scan found no usable drives: 4/-1.
[2017/01/02 13:00:55  2163] Drive sn WD-WMAY00561588
[2017/01/02 13:00:55  2163] Drive sn WD-WMAY00561588 WD-WMAY00561588
[2017/01/02 13:00:55  2163] Take /dev/sda WD-WMAY00561588 away
[2017/01/02 13:00:55  2163] Drive sn WD-WMAY00519101
[2017/01/02 13:00:55  2163] Drive sn WD-WMAY00519101 WD-WMAY00519101
[2017/01/02 13:00:55  2163] Take /dev/sdc WD-WMAY00519101 away
[2017/01/02 13:00:55  2163] Drive sn WD-WMAY00446595
[2017/01/02 13:00:55  2163] Drive sn WD-WMAY00446595 WD-WMAY00446595
[2017/01/02 13:00:55  2163] Take /dev/sde WD-WMAY00446595 away
[2017/01/02 13:00:55  2163] Drive sn WD-WMAY00493613
[2017/01/02 13:00:55  2163] Drive WD-WMAY00493613 missing.
[2017/01/02 13:00:55  2163] Drive sn WD-WCC4E0420327
[2017/01/02 13:00:55  2163] Drive sn WD-WCC4E0420327 WD-WCC4E0420327
[2017/01/02 13:00:55  2163] Take /dev/sdb WD-WCC4E0420327 away
[2017/01/02 13:00:55  2163] Drive sn WD-WCC4E0426359
[2017/01/02 13:00:55  2163] Drive sn WD-WCC4E0426359 WD-WCC4E0426359
[2017/01/02 13:00:55  2163] Take /dev/sdd WD-WCC4E0426359 away
[2017/01/02 13:00:55  2163] Total missing: 1, index=3
[2017/01/02 13:00:55  2163] found missing drive: /dev/sdf WD-WMAY00493613 MD_NULL blocks=1953514584 1953512030 1953512030
[2017/01/02 13:00:55  2163] XRAID cfg_pt = , these two drives size close: 0 /dev/sdf 1953514584, 4 /dev/sdh 1953514584
[2017/01/02 13:00:55  2163] Changed drive selction: s=4/4, t=-1/0, file=/var/log/frontview/.known_cfgdir/
[2017/01/02 13:00:55  2163] Needed pt file /var/log/frontview/.known_cfgdir/ missing,
[2017/01/02 13:00:55  2163] Changed drive selction: s=4/4, t=-1/0, file=
[2017/01/02 13:00:55  2163] Repair md: use disk /dev/sdf, size 1953514584
[2017/01/02 13:00:58  2163] LINE 1505: exec command: sgdisk -Z /dev/sdf
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
[2017/01/02 13:01:01  2163] Update new added disk information: from /dev/sdi to /dev/sdf
[2017/01/02 13:01:01  2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 0 0 NO_MD 0
[2017/01/02 13:01:01  2163] partitions: 0 property: UNKNOWN
[2017/01/02 13:01:01  2163] 000 /dev/sdi,,8, 128, 1953514584 1953513472 1953513472 NO_MD 1
[2017/01/02 13:01:01  2163] partitions: 1 property: UNKNOWN
[2017/01/02 13:01:01  2163]     000 /dev/sdi1, 8, 129, 1953513472 NO_MD md=
[2017/01/02 13:01:01  2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 1953513472 1953513472 NO_MD 1
[2017/01/02 13:01:01  2163] partitions: 1 property: UNKNOWN
[2017/01/02 13:01:01  2163]     000 /dev/sdf1, 0, 0, 1953513472 NO_MD md=
[2017/01/02 13:01:01  2163] gpt sig:0,mbr sig:0,fake type:0
[2017/01/02 13:01:01  2163] get disk /dev/sdd format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2017/01/02 13:01:01  2163] LINE 7921: exec command: sgdisk -p /dev/sdd | grep '[0-9]    ' > /var/log/frontview/.V_E.source_pt.sav
[2017/01/02 13:01:02  2163] Get format of pt /var/log/frontview/.V_E.source_pt.sav
[2017/01/02 13:01:02  2163] pt is GPT format
[2017/01/02 13:01:02  2163] dump pt list to /dev/sdf root: 0
[2017/01/02 13:01:02  2163] index: 1 start=64 end=8388671 last_end=0
[2017/01/02 13:01:02  2163] index: 2 start=8388672 end=9437247 last_end=8388671
[2017/01/02 13:01:02  2163] index: 5 start=9437256 end=3907024131 last_end=9437247
[2017/01/02 13:01:02  2163] index: 6 start=3907024136 end=7814033038 last_end=3907024131
[2017/01/02 13:01:02  2163] LINE 8038: exec command: sgdisk -g -a 8   -n 1:64:8388671 -t 1:FD00 -n 2:8388672:9437247 -t 2:FD00 -n 5:9437256:3907024131 -t 5:FD00 -n 6:3907024136:7814033038 -t 6:FD00 /dev/sdf
Creating new GPT entries.
Could not create partition 5 from 3907024136 to 7814033038
Could not change partition 6's type code to FD00!
Error encountered; not saving changes.
[2017/01/02 13:01:03  2163] partition copy error /dev/sdf rc = 400
[2017/01/02 13:01:03  2163] Copy partition table FAIL rc=-3!!
[2017/01/02 13:01:03  2163] Failed to add disk to md
[2017/01/02 13:01:03  2163] Added drive to md, grown=0/0xff869778
[2017/01/02 13:01:03  2163] LINE 4852: exec command: /usr/sbin/expand_md -a super >> /var/log/frontview/expand_md.log 2>&1 &
[2017/01/02 13:01:03  2163] LINE 4855: exec command: /frontview/bin/volumescan &
[2017/01/02 13:01:03  2163] LINE 4939: exec command: ps -ef | grep expand_md | grep -v grep > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 13:01:04  2163] STAGE_WIPE: Clean my_pid 2163
[2017/01/02 13:02:05  4622] STAGE_CHECK: saved my_pid 4622 in check mode.
[2017/01/02 13:02:05  4622] RAID MODE: 1, sn=
[2017/01/02 13:02:05  4622] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 13:02:05  4622] Current file system status: ext4
[2017/01/02 13:02:05  4622] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2017/01/02 13:02:05  4622] Running disk SMART quick self-test on new disk 6 [/dev/sdf]...
[2017/01/02 13:04:06  4622] PASSED
[2017/01/02 13:04:06  4622] LINE 1144: exec command: killall -HUP monitor_enclosure
[2017/01/02 13:04:07  4622] X_level: 5
[2017/01/02 13:04:07  4622]  /usr/sbin/expand_md -a super
[2017/01/02 13:04:07  4622] MD degraded 1, bg_job 0, boot 0
[2017/01/02 13:04:07  4622] STAGE_WIPE: Clean my_pid 4622

 

I tried manually zapping the partition table and rebooted, but that did not work. I could use some help getting the system running properly again.
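To be clear about what I mean by manual zapping: it was something along these lines over SSH (the exact commands may have differed slightly; /dev/sdf is the new disk in bay 6, as in the log above):

# wipe the GPT/MBR structures on the new disk only (the same command expand_md itself runs)
sgdisk -Z /dev/sdf
# clear any stale md RAID superblock on the disk, if one is present
mdadm --zero-superblock /dev/sdf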

 

Thanks in advance for all of your wisdom and support.

 

32 Replies

  • StephenB
    Guru - Experienced User

    Perhaps check the disk status with RAIDar as well.

     

    I think I'd give it some time to complete the rebuild (which hopefully is happening) before pulling the drive/rebooting again.  Then see if the disk status becomes consistent.

    • JS78
      Aspirant

      (screenshot attached: 2-1-2017 14-03-44.jpg)

      Here's what RAIDar says. You had a similar case on the forum a while ago (I can't find the post anymore), and that did not work either: from the log file it seems the process just stops. What process do I need to look for with ps to see whether the system is still rebuilding? (A quick way to check is sketched below, after StephenB's reply.)

      • StephenB
        Guru - Experienced User

        Maybe look in mdconfig.log in the log zip.
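For reference on the ps question above: an md rebuild runs as a kernel thread rather than a regular process, so /proc/mdstat is the usual place to look. Something like the following should show whether a resync is actually running (assuming SSH access is enabled and the data volume is /dev/md2, as it appears in the log):

# a progress bar appears next to the array while it is recovering/resyncing
cat /proc/mdstat
# the kernel resync thread shows up as [md2_resync] while a rebuild is in progress
ps -ef | grep md2_resync | grep -v grep
# detailed state, including "Rebuild Status: NN% complete" during a rebuild
mdadm --detail /dev/md2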
