ReadyNAS Pro 6
ReadyNAS Pro 6 not rebuilding RAID after inserting new disk
Drive 6 failed in my ReadyNAS Pro 6 (RAIDiator 4.2.30). I inserted a new drive of the same make and type into the slot. FrontView and the log files now show conflicting information: the log says the system is rebuilding, while the status shows the system is non-redundant and that drive 6 has failed. I logged in over SSH and checked the syslog (it reports "NAS-HOME kernel: sdf: unknown partition table") and the expand_md.log file (a portion is pasted below).

[2017/01/02 12:58:53 2162] Boot, handle mutiple expand_md
[2017/01/02 12:58:54 2163] STAGE_CHECK: saved my_pid 2163 in check mode.
[2017/01/02 12:58:54 2163] RAID MODE: 1, sn=
[2017/01/02 12:58:54 2163] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 12:58:54 2163] Current file system status: offline
[2017/01/02 12:58:54 2163] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2017/01/02 12:58:54 2163] Running disk SMART quick self-test on new disk 6 [/dev/sdf]...
[2017/01/02 13:00:54 2163] PASSED
[2017/01/02 13:00:54 2163] LINE 1144: exec command: killall -HUP monitor_enclosure
[2017/01/02 13:00:55 2163] X_level: 5
[2017/01/02 13:00:55 2163] /usr/sbin/expand_md
[2017/01/02 13:00:55 2163] MD degraded 1, bg_job 0, boot 1
[2017/01/02 13:00:55 2163] Read 1957072 byte from configuration
[2017/01/02 13:00:55 2163] Disk configuration matching with online configuration.
[2017/01/02 13:00:55 2163] May need to fix array.
[2017/01/02 13:00:55 2163] new_disk_pt_count: 1
[2017/01/02 13:00:55 2163] * new disk: /dev/sdf
[2017/01/02 13:00:55 2163] ===== Partition Entry (MD not used) =====
[2017/01/02 13:00:55 2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 0 0 NO_MD 0
[2017/01/02 13:00:55 2163] partitions: 0 property: UNKNOWN
[2017/01/02 13:00:55 2163] ===== Partition Entry (Used by MD) =====
[2017/01/02 13:00:55 2163] 000 /dev/sda,WD-WMAY00561588,8, 0, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 3 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sda1, 8, 1, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sda2, 8, 2, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sda5, 8, 5, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 001 /dev/sdc,WD-WMAY00519101,8, 32, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 3 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sdc1, 8, 33, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sdc2, 8, 34, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sdc5, 8, 37, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 002 /dev/sde,WD-WMAY00446595,8, 64, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 3 property: 1SECTOR
[2017/01/02 13:00:55 2163] 000 /dev/sde1, 8, 65, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sde2, 8, 66, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sde5, 8, 69, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 003 /dev/sdb,WD-WCC4E0420327,8, 16, 3907018584 3907016481 3907016481 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 4 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sdb1, 8, 17, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sdb2, 8, 18, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sdb5, 8, 21, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 003 /dev/sdb6, 8, 22, 1953504451 MD_FULL md=/dev/md3
[2017/01/02 13:00:55 2163] 004 /dev/sdd,WD-WCC4E0426359,8, 48, 3907018584 3907016481 3907016481 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 4 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sdd1, 8, 49, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sdd2, 8, 50, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sdd5, 8, 53, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 003 /dev/sdd6, 8, 54, 1953504451 MD_FULL md=/dev/md3
[2017/01/02 13:00:55 2163] in find_drive_... looking at /dev/md3
[2017/01/02 13:00:55 2163] in find_drive_... looking at /dev/md2
[2017/01/02 13:00:55 2163] found drive has the needed size for /dev/md2: /dev/sdd, 3907018584, 3907016481
[2017/01/02 13:00:55 2163] Pre-scan found no usable drives: 4/-1.
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00561588
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00561588 WD-WMAY00561588
[2017/01/02 13:00:55 2163] Take /dev/sda WD-WMAY00561588 away
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00519101
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00519101 WD-WMAY00519101
[2017/01/02 13:00:55 2163] Take /dev/sdc WD-WMAY00519101 away
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00446595
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00446595 WD-WMAY00446595
[2017/01/02 13:00:55 2163] Take /dev/sde WD-WMAY00446595 away
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00493613
[2017/01/02 13:00:55 2163] Drive WD-WMAY00493613 missing.
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0420327
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0420327 WD-WCC4E0420327
[2017/01/02 13:00:55 2163] Take /dev/sdb WD-WCC4E0420327 away
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0426359
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0426359 WD-WCC4E0426359
[2017/01/02 13:00:55 2163] Take /dev/sdd WD-WCC4E0426359 away
[2017/01/02 13:00:55 2163] Total missing: 1, index=3
[2017/01/02 13:00:55 2163] found missing drive: /dev/sdf WD-WMAY00493613 MD_NULL blocks=1953514584 1953512030 1953512030
[2017/01/02 13:00:55 2163] XRAID cfg_pt = , these two drives size close: 0 /dev/sdf 1953514584, 4 /dev/sdh 1953514584
[2017/01/02 13:00:55 2163] Changed drive selction: s=4/4, t=-1/0, file=/var/log/frontview/.known_cfgdir/
[2017/01/02 13:00:55 2163] Needed pt file /var/log/frontview/.known_cfgdir/ missing,
[2017/01/02 13:00:55 2163] Changed drive selction: s=4/4, t=-1/0, file=
[2017/01/02 13:00:55 2163] Repair md: use disk /dev/sdf, size 1953514584
[2017/01/02 13:00:58 2163] LINE 1505: exec command: sgdisk -Z /dev/sdf
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
[2017/01/02 13:01:01 2163] Update new added disk information: from /dev/sdi to /dev/sdf
[2017/01/02 13:01:01 2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 0 0 NO_MD 0
[2017/01/02 13:01:01 2163] partitions: 0 property: UNKNOWN
[2017/01/02 13:01:01 2163] 000 /dev/sdi,,8, 128, 1953514584 1953513472 1953513472 NO_MD 1
[2017/01/02 13:01:01 2163] partitions: 1 property: UNKNOWN
[2017/01/02 13:01:01 2163] 000 /dev/sdi1, 8, 129, 1953513472 NO_MD md=
[2017/01/02 13:01:01 2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 1953513472 1953513472 NO_MD 1
[2017/01/02 13:01:01 2163] partitions: 1 property: UNKNOWN
[2017/01/02 13:01:01 2163] 000 /dev/sdf1, 0, 0, 1953513472 NO_MD md=
[2017/01/02 13:01:01 2163] gpt sig:0,mbr sig:0,fake type:0
[2017/01/02 13:01:01 2163] get disk /dev/sdd format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2017/01/02 13:01:01 2163] LINE 7921: exec command: sgdisk -p /dev/sdd | grep '[0-9] ' > /var/log/frontview/.V_E.source_pt.sav
[2017/01/02 13:01:02 2163] Get format of pt /var/log/frontview/.V_E.source_pt.sav
[2017/01/02 13:01:02 2163] pt is GPT format
[2017/01/02 13:01:02 2163] dump pt list to /dev/sdf root: 0
[2017/01/02 13:01:02 2163] index: 1 start=64 end=8388671 last_end=0
[2017/01/02 13:01:02 2163] index: 2 start=8388672 end=9437247 last_end=8388671
[2017/01/02 13:01:02 2163] index: 5 start=9437256 end=3907024131 last_end=9437247
[2017/01/02 13:01:02 2163] index: 6 start=3907024136 end=7814033038 last_end=3907024131
[2017/01/02 13:01:02 2163] LINE 8038: exec command: sgdisk -g -a 8 -n 1:64:8388671 -t 1:FD00 -n 2:8388672:9437247 -t 2:FD00 -n 5:9437256:3907024131 -t 5:FD00 -n 6:3907024136:7814033038 -t 6:FD00 /dev/sdf
Creating new GPT entries.
Could not create partition 5 from 3907024136 to 7814033038
Could not change partition 6's type code to FD00!
Error encountered; not saving changes.
[2017/01/02 13:01:03 2163] partition copy error /dev/sdf rc = 400
[2017/01/02 13:01:03 2163] Copy partition table FAIL rc=-3!!
[2017/01/02 13:01:03 2163] Failed to add disk to md
[2017/01/02 13:01:03 2163] Added drive to md, grown=0/0xff869778
[2017/01/02 13:01:03 2163] LINE 4852: exec command: /usr/sbin/expand_md -a super >> /var/log/frontview/expand_md.log 2>&1 &
[2017/01/02 13:01:03 2163] LINE 4855: exec command: /frontview/bin/volumescan &
[2017/01/02 13:01:03 2163] LINE 4939: exec command: ps -ef | grep expand_md | grep -v grep > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 13:01:04 2163] STAGE_WIPE: Clean my_pid 2163
[2017/01/02 13:02:05 4622] STAGE_CHECK: saved my_pid 4622 in check mode.
[2017/01/02 13:02:05 4622] RAID MODE: 1, sn=
[2017/01/02 13:02:05 4622] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 13:02:05 4622] Current file system status: ext4
[2017/01/02 13:02:05 4622] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2017/01/02 13:02:05 4622] Running disk SMART quick self-test on new disk 6 [/dev/sdf]...
[2017/01/02 13:04:06 4622] PASSED
[2017/01/02 13:04:06 4622] LINE 1144: exec command: killall -HUP monitor_enclosure
[2017/01/02 13:04:07 4622] X_level: 5
[2017/01/02 13:04:07 4622] /usr/sbin/expand_md -a super
[2017/01/02 13:04:07 4622] MD degraded 1, bg_job 0, boot 0
[2017/01/02 13:04:07 4622] STAGE_WIPE: Clean my_pid 4622

I tried manually zapping the partition table and rebooting, but that did not work. I could use some help getting the system running properly again.
Thanks in advance for all of your wisdom and support.
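The sgdisk failure in the log suggests expand_md copied the partition layout of the 4 TB member /dev/sdd onto the 2 TB replacement, so the last partition could not fit and the whole copy was abandoned. For readers in the same spot, here is a minimal sketch of one possible manual repair over SSH, assuming (as in the log above) that /dev/sdf is the 2 TB replacement and /dev/sda is a healthy 2 TB member of md0/md1/md2. This is not an official NETGEAR procedure; the device names are assumptions that must be verified first, and a current backup is strongly advised.

  # Sketch only -- device names are taken from the log above; verify them before running anything destructive.
  cat /proc/mdstat              # confirm which md arrays are degraded
  mdadm --detail /dev/md2       # md2 should report one device missing
  sgdisk -p /dev/sda            # partition layout of a healthy 2 TB member
  sgdisk -p /dev/sdf            # current layout of the replacement disk

  sgdisk -Z /dev/sdf            # wipe any stale GPT/MBR structures on the new disk
  sgdisk -R /dev/sdf /dev/sda   # replicate sda's 2 TB table onto sdf (instead of sdd's 4 TB layout)
  sgdisk -G /dev/sdf            # randomize the copied GUIDs so the two disks stay distinct

  mdadm /dev/md0 --add /dev/sdf1   # re-add each partition to its degraded array
  mdadm /dev/md1 --add /dev/sdf2
  mdadm /dev/md2 --add /dev/sdf5
  cat /proc/mdstat                 # a resync/rebuild should now be visible

Alternatively, once the partition table on /dev/sdf matches the 2 TB members, a reboot may let the firmware's own expand_md pick the disk up on its own; the log indicates the automatic path failed at the partition-copy step, before any disk was ever added to the arrays.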
Is it possible to disable auto boot on power-on for ReadyNAS Pro (6) OS 6.4.1?
I have a legacy ReadyNAS Pro and a ReadyNAS Pro 6 (RNDP6000-V2). Both are running 6.4.1, and both have BIOS 07/26/2010 FLAME6-MB V2.0. Unfortunately, both boot automatically as soon as mains power is applied. I would really prefer to power them on manually with the front button *after* the mains is connected. I've looked through the BIOS and found no suitable option. Is this indeed possible? Spike
ReadyNAS Pro 6 Corrupt Root - Need Help
I am new to ReadyNAS; I have never owned one, and scouting through the forums I have not found my problem covered in enough detail to get started. I have just purchased a ReadyNAS Pro 6 off eBay with no disks or documentation. I have worked out that I need RAIDar, and I have my six 3TB disks in the ReadyNAS (they are on the HCL). When I turn it on, RAIDar says "Corrupt Root". It shows the model, the IP address, and all drives greyed out. From the forum I understand I need to do something before the ReadyNAS will see the drives, and something to set the disks up in a RAID so that RAIDar sees the ReadyNAS completely. Also, the Admin Page button in RAIDar says the page is not accessible, and the Browse button just opens my Documents. Could someone tell me how to get the disks seen by the ReadyNAS and how to set up the RAID? RAID 6 is what I was advised to use. There has to be a process to make the disks work with the ReadyNAS. Can you keep it simple? I'm not that techno... Thank you
ReadyNAS How-To: download RAIDar (what is the difference with RAIDiator?) 4.2.7 on PC
When I download either RAIDiator or RAIDar, I get a file with an unknown extension named RAIDiator-V4.2.27 and a web shortcut named RAIDiator-V4.2.27. I assume this is a firmware file. Neither of these is the .EXE I need to install RAIDar V4.2.27 on the PC. From what I can tell, I need RAIDar V4.2.27 for my ReadyNAS Pro 6 on my PC, but I cannot find the RAIDar .EXE on the site to download. Can someone tell me where I can download the .EXE, please? Background: I had it on my PC, but the PC crashed and had to be reimaged. From memory it was RAIDar 4.2.27. I can't remember precisely how I got it onto my PC; I'm fairly certain I downloaded it from the ReadyNAS site, but I can't find it again. Thank you