Forum Discussion
CarlEdman
Jan 28, 2016 · Luminary
Ultra 6 Replacement Drive Not Syncing
I've been running a ReadyNAS Ultra 6 Plus (latest official firmware) for some six years (and an NV+ for years before that), so when I got messages that one of the six 2-TByte drives (Seagate ST2000DL...
- Jul 14, 2016
Just came across this thread again and thought I'd give those who have been following these travails with rapt attention some closure: Six months later, the system runs fine under OS 6 without any more disk troubles.
CarlEdman
Jan 28, 2016 · Luminary
Thanks. I knew that drives that are already partitioned may not be recognized promptly. That is why I deleted all partitions on the PC before returning the drive to the ReadyNAS. Also, if I interpret the sgdisk output correctly, the ReadyNAS does not see any pre-existing partitions on the replacement drive (/dev/sdb) either.
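(For anyone wanting to double-check the same thing, a minimal way to confirm the replacement drive really is blank is to print its partition table over ssh; this is only a sketch, assuming the new drive is still /dev/sdb:)
# Print the partition table; a freshly wiped disk should list no partition entries.
sgdisk -p /dev/sdb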
CarlEdman
Jan 28, 2016 · Luminary
For lack of any guidance, I zapped the partition table on the new drive (sgdisk -Z /dev/sdb) and rebooted. On boot, the front panel indicated "testing disk 2" for a few minutes, followed by "disk 2 passed." But I can see no indication that the sync is actually taking place. Is there any log or similar I can look at over ssh that would give me some clue about what, if anything, is happening? All the relevant lines from /var/log/syslog were in the original post.
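(One more thing that might be worth checking over ssh, assuming the ReadyNAS exposes the standard Linux md interface, is /proc/mdstat, which shows whether a resync/recovery is actually running:)
# Any ongoing rebuild shows up as a "recovery" line with a progress percentage.
cat /proc/mdstat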
- StephenB · Jan 28, 2016 · Guru - Experienced User
There is an expansion log (expand_md.log in /var/log/frontview).
If you hover your mouse over the disk "ball" icons in frontview or RAIDar, you also get some status.
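(A minimal way to watch that log live from an ssh session, assuming the path above, is simply:)
# Follow the expansion log as new entries are appended.
tail -f /var/log/frontview/expand_md.log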
- CarlEdman · Jan 28, 2016 · Luminary
Thanks, StephenB. After the 'sgdisk -Z' and reboot, I think the expansion actually has started. The NAS emailed me "Data volume will be rebuilt with disk 2." The volume and disk balls in frontview are blinking yellow.
The expansion log follows. I *think* that means that expansion is proceeding, but only waiting 24 hours will tell.
[2016/01/28 17:28:40 2062] Boot, handle mutiple expand_md
[2016/01/28 17:28:41 2063] STAGE_CHECK: saved my_pid 2063 in check mode.
[2016/01/28 17:28:41 2063] RAID MODE: 1, sn=
[2016/01/28 17:28:41 2063] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2016/01/28 17:28:41 2063] Current file system status: offline
[2016/01/28 17:28:41 2063] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2016/01/28 17:28:41 2063] Running disk SMART quick self-test on new disk 2 [/dev/sdb]...
[2016/01/28 17:29:42 2063] PASSED
[2016/01/28 17:29:42 2063] LINE 1144: exec command: killall -HUP monitor_enclosure
[2016/01/28 17:29:43 2063] X_level: 5
[2016/01/28 17:29:43 2063] /usr/sbin/expand_md
[2016/01/28 17:29:43 2063] MD degraded 1, bg_job 0, boot 1
[2016/01/28 17:29:43 2063] Read 1957072 byte from configuration
[2016/01/28 17:29:43 2063] Disk configuration matching with online configuration.
[2016/01/28 17:29:43 2063] May need to fix array.
[2016/01/28 17:29:43 2063] new_disk_pt_count: 1
[2016/01/28 17:29:43 2063] * new disk: /dev/sdb
[2016/01/28 17:29:43 2063] ===== Partition Entry (MD not used) =====
[2016/01/28 17:29:43 2063] 000 /dev/sdb,S30126C4,8, 16, 3907018584 0 0 NO_MD 0
[2016/01/28 17:29:43 2063] partitions: 0 property: UNKNOWN
[2016/01/28 17:29:43 2063] ===== Partition Entry (Used by MD) =====
[2016/01/28 17:29:43 2063] 000 /dev/sda,5YD1AWGL,8, 0, 1953514584 1953511996 1953511996 MD_FULL 0
[2016/01/28 17:29:43 2063] partitions: 3 property: 4K
[2016/01/28 17:29:43 2063] 000 /dev/sda1, 8, 1, 4194304 MD_FULL md=/dev/md0
[2016/01/28 17:29:43 2063] 001 /dev/sda2, 8, 2, 524288 MD_FULL md=/dev/md1
[2016/01/28 17:29:43 2063] 002 /dev/sda5, 8, 5, 1948793404 MD_FULL md=/dev/md2
[2016/01/28 17:29:43 2063] 001 /dev/sdc,5YD1AWGR,8, 32, 1953514584 1953511996 1953511996 MD_FULL 0
[2016/01/28 17:29:43 2063] partitions: 3 property: 4K
[2016/01/28 17:29:43 2063] 000 /dev/sdc1, 8, 33, 4194304 MD_FULL md=/dev/md0
[2016/01/28 17:29:43 2063] 001 /dev/sdc2, 8, 34, 524288 MD_FULL md=/dev/md1
[2016/01/28 17:29:43 2063] 002 /dev/sdc5, 8, 37, 1948793404 MD_FULL md=/dev/md2
[2016/01/28 17:29:43 2063] 002 /dev/sdd,5YD17Q6F,8, 48, 1953514584 1953511996 1953511996 MD_FULL 0
[2016/01/28 17:29:43 2063] partitions: 3 property: 4K
[2016/01/28 17:29:43 2063] 000 /dev/sdd1, 8, 49, 4194304 MD_FULL md=/dev/md0
[2016/01/28 17:29:43 2063] 001 /dev/sdd2, 8, 50, 524288 MD_FULL md=/dev/md1
[2016/01/28 17:29:43 2063] 002 /dev/sdd5, 8, 53, 1948793404 MD_FULL md=/dev/md2
[2016/01/28 17:29:43 2063] 003 /dev/sde,5YD19PC1,8, 64, 1953514584 1953511996 1953511996 MD_FULL 0
[2016/01/28 17:29:43 2063] partitions: 3 property: 4K
[2016/01/28 17:29:43 2063] 000 /dev/sde1, 8, 65, 4194304 MD_FULL md=/dev/md0
[2016/01/28 17:29:43 2063] 001 /dev/sde2, 8, 66, 524288 MD_FULL md=/dev/md1
[2016/01/28 17:29:43 2063] 002 /dev/sde5, 8, 69, 1948793404 MD_FULL md=/dev/md2
[2016/01/28 17:29:43 2063] 004 /dev/sdf,Z340PDXM,8, 80, 1953514584 1953511996 1953511996 MD_FULL 0
[2016/01/28 17:29:43 2063] partitions: 3 property: 4K
[2016/01/28 17:29:43 2063] 000 /dev/sdf1, 8, 81, 4194304 MD_FULL md=/dev/md0
[2016/01/28 17:29:43 2063] 001 /dev/sdf2, 8, 82, 524288 MD_FULL md=/dev/md1
[2016/01/28 17:29:43 2063] 002 /dev/sdf5, 8, 85, 1948793404 MD_FULL md=/dev/md2
[2016/01/28 17:29:43 2063] in find_drive_... looking at /dev/md2
[2016/01/28 17:29:43 2063] found drive has the needed size for /dev/md2: /dev/sdf, 1953514584, 1953511996
[2016/01/28 17:29:43 2063] Drive sn 5YD1AWGL
[2016/01/28 17:29:43 2063] Drive sn 5YD1AWGL 5YD1AWGL
[2016/01/28 17:29:43 2063] Take /dev/sda 5YD1AWGL away
[2016/01/28 17:29:43 2063] Drive sn 5YD17DPM
[2016/01/28 17:29:43 2063] Drive 5YD17DPM missing.
[2016/01/28 17:29:43 2063] Drive sn 5YD1AWGR
[2016/01/28 17:29:43 2063] Drive sn 5YD1AWGR 5YD1AWGR
[2016/01/28 17:29:43 2063] Take /dev/sdc 5YD1AWGR away
[2016/01/28 17:29:43 2063] Drive sn 5YD17Q6F
[2016/01/28 17:29:43 2063] Drive sn 5YD17Q6F 5YD17Q6F
[2016/01/28 17:29:43 2063] Take /dev/sdd 5YD17Q6F away
[2016/01/28 17:29:43 2063] Drive sn 5YD19PC1
[2016/01/28 17:29:43 2063] Drive sn 5YD19PC1 5YD19PC1
[2016/01/28 17:29:43 2063] Take /dev/sde 5YD19PC1 away
[2016/01/28 17:29:43 2063] Drive sn Z340PDXM
[2016/01/28 17:29:43 2063] Drive sn Z340PDXM Z340PDXM
[2016/01/28 17:29:43 2063] Take /dev/sdf Z340PDXM away
[2016/01/28 17:29:43 2063] Total missing: 1, index=1
[2016/01/28 17:29:43 2063] found missing drive: /dev/sdb 5YD17DPM MD_NULL blocks=1953514584 1953511996 1953511996
[2016/01/28 17:29:43 2063] XRAID cfg_pt = Z340PDXM, these two drives size close: 0 /dev/sdb 3907018584, 0 /dev/sda 1953514584
[2016/01/28 17:29:43 2063] Changed drive selction: s=4/0, t=0/0, file=/var/log/frontview/.known_cfgdir/Z340PDXM
[2016/01/28 17:29:43 2063] Changed drive selction: s=4/0, t=0/0, file=/var/log/frontview/.known_cfgdir/Z340PDXM
[2016/01/28 17:29:51 2063] Repair md: use disk /dev/sdb, size 3907018584
[2016/01/28 17:29:54 2063] LINE 1505: exec command: sgdisk -Z /dev/sdb
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
[2016/01/28 17:29:56 2063] Update new added disk information: from /dev/sdf to /dev/sdb
[2016/01/28 17:29:56 2063] 000 /dev/sdb,S30126C4,8, 16, 3907018584 0 0 NO_MD 0
[2016/01/28 17:29:56 2063] partitions: 0 property: UNKNOWN
[2016/01/28 17:29:56 2063] 000 /dev/sdf,Z340PDXM,8, 80, 1953514584 1953511996 1953511996 MD_FULL 0
[2016/01/28 17:29:56 2063] partitions: 3 property: 4K
[2016/01/28 17:29:56 2063] 000 /dev/sdf1, 8, 81, 4194304 MD_FULL md=/dev/md0
[2016/01/28 17:29:56 2063] 001 /dev/sdf2, 8, 82, 524288 MD_FULL md=/dev/md1
[2016/01/28 17:29:56 2063] 002 /dev/sdf5, 8, 85, 1948793404 MD_FULL md=/dev/md2
[2016/01/28 17:29:56 2063] 000 /dev/sdb,S30126C4,8, 16, 3907018584 1953511996 1953511996 NO_MD 0
[2016/01/28 17:29:56 2063] partitions: 3 property: UNKNOWN
[2016/01/28 17:29:56 2063] 000 /dev/sdb1, 0, 0, 4194304 NO_MD md=
[2016/01/28 17:29:56 2063] 001 /dev/sdb2, 0, 0, 524288 NO_MD md=
[2016/01/28 17:29:56 2063] 002 /dev/sdb5, 0, 0, 1948793404 NO_MD md=
[2016/01/28 17:29:56 2063] Get format of pt /var/log/frontview/.known_cfgdir/Z340PDXM
[2016/01/28 17:29:56 2063] Error unkonwn partition table /var/log/frontview/.known_cfgdir/Z340PDXM !!!!
[2016/01/28 17:29:56 2063] Need to increase ext size from 0 to 0
[2016/01/28 17:29:56 2063] LINE 3717: exec command: sfdisk --force -L /dev/sdb < /var/log/frontview/.V_E.source_pt1.sav
Disk /dev/sdb: 486401 cylinders, 255 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
No partitions found
sfdisk: no partition table present.
[2016/01/28 17:29:56 2063] Input target disk 4K align partition problem: 256, use old partition.
[2016/01/28 17:29:56 2063] LINE 3723: exec command: sfdisk --force -L /dev/sdb < /var/log/frontview/.known_cfgdir/Z340PDXM
Disk /dev/sdb: 486401 cylinders, 255 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
No partitions found
sfdisk: no partition table present.
[2016/01/28 17:29:56 2063] LINE 3730: exec command: sgdisk -g -a 8 /dev/sdb
Creating new GPT entries.
The operation has completed successfully.
[2016/01/28 17:29:58 2063] Succeed to convert mbr to gpt /dev/sda.
[2016/01/28 17:29:58 2063] Input target disk partition problem: 256
[2016/01/28 17:29:58 2063] Failed to add disk to md
[2016/01/28 17:29:58 2063] Added drive to md, grown=0/0xffa3a1f8
[2016/01/28 17:29:58 2063] LINE 4852: exec command: /usr/sbin/expand_md -a super >> /var/log/frontview/expand_md.log 2>&1 &
[2016/01/28 17:29:58 2063] LINE 4855: exec command: /frontview/bin/volumescan &
[2016/01/28 17:29:58 2063] LINE 4939: exec command: ps -ef | grep expand_md | grep -v grep > /var/log/frontview/.V_E.snapshotstat
[2016/01/28 17:29:58 2063] STAGE_WIPE: Clean my_pid 2063
[2016/01/28 17:29:59 4066] STAGE_CHECK: saved my_pid 4066 in check mode.
[2016/01/28 17:29:59 4066] RAID MODE: 1, sn=
[2016/01/28 17:29:59 4066] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2016/01/28 17:29:59 4066] Current file system status: ext4
[2016/01/28 17:29:59 4066] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2016/01/28 17:30:00 4066] Running disk SMART quick self-test on new disk 2 [/dev/sdb]...
[2016/01/28 17:31:00 4066] PASSED
[2016/01/28 17:31:00 4066] LINE 1144: exec command: killall -HUP monitor_enclosure
[2016/01/28 17:31:01 4066] X_level: 5
[2016/01/28 17:31:01 4066] /usr/sbin/expand_md -a super
[2016/01/28 17:31:01 4066] MD degraded 1, bg_job 0, boot 0
[2016/01/28 17:31:01 4066] STAGE_WIPE: Clean my_pid 4066
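(Given the "Failed to add disk to md" line above, one way to confirm whether disk 2 actually joined the data array is to ask md directly; this is a sketch, assuming mdadm is available on the ReadyNAS shell and the data volume is /dev/md2 as in the log:)
# Show member devices, their state, and any rebuild progress for the data array.
mdadm --detail /dev/md2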
- StephenB · Jan 29, 2016 · Guru - Experienced User
It looks promising; please post back when the dust settles and confirm that everything is up again.