Detected increasing uncorrectable error count
I have what I think is a failing drive, but I cannot locate in the manual what I need to do to replace it. It seems to be Disk 2, but I cannot tell which physical drive this is: do the numbers start from the right or the left, and do they run from Disk 0 to Disk 3 or from Disk 1 to Disk 4? I have attached an image of the log file; I only found this when trying to update the firmware, and had not received any e-mails, so this is the first I had heard about it. Also, what is the procedure to change this disk? Again, I cannot seem to locate a guide to replacing a faulty drive. Can I do this while the ReadyNAS unit is switched on and working, or do I shut the unit down and replace the drive while it is switched off? Again, I am unsure which drive I need to pull from the unit. Any help would be much appreciated... James
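Slot numbering differs between ReadyNAS models, so rather than guessing left-to-right order, it is safer to match the serial number the NAS reports for the failing slot against the label printed on each physical drive. As a minimal sketch (assuming SSH access and that `smartctl` is available on the unit, which may not hold on every firmware), the device info is embedded here as a captured sample so the parsing is reproducible; on the NAS itself you would pipe `smartctl -i /dev/sdX` instead:

```shell
#!/bin/sh
# Sample of `smartctl -i` output, embedded for illustration; the model and
# serial shown are placeholders, not taken from this poster's unit.
sample='Device Model:     ST2000DM001-9YN164
Serial Number:    W1E14G3B'

# Pull out the two fields that also appear on the drive's paper label.
model=$(printf '%s\n' "$sample" | sed -n 's/^Device Model: *//p')
serial=$(printf '%s\n' "$sample" | sed -n 's/^Serial Number: *//p')
echo "Reported model:  $model"
echo "Reported serial: $serial"
```

The serial the NAS log reports for the failing slot can then be matched against the sticker on each physical drive, which removes any ambiguity about whether numbering starts at 0 or 1, left or right.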
ReadyNAS 104 Disk Failure
NV+ Stuck on (hangs on) Booting...
It's actually a RND4000 v3 (RAIDiator 4.1.15) with 4x Seagate Barracuda 2TB. I think it's RAID 5 (I know I'm supposed to know, but I don't... not for sure, anyway). Hi there. My NV+ is stuck on Booting... Even after browsing the discussions on 'stuck on booting' for similar devices, I'm stuck on the next step myself. After it hangs, it will only power down by pulling the plug. No remote access possible. I tried booting with the FW update option... that didn't work. The FW updated OK, I guess, but it got stuck on Booting... again. Same thing for the memory-test option. After powering down (pulling the plug again) I numbered the drives and ran them through SeaTools on my PC, performing the Short Generic test. Numbers 1, 3 and 4 passed OK. No. 2 makes some worrying clicking noises after being connected, and the test won't run (stuck at 0% on the first test, the 'outer scan'). I guess that disk is completely dead. So what is next? Most interested in saving the data, of course. (Yes, I do back up regularly, but even the most recent stuff is worth saving.) Any advice greatly appreciated. Jasper
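With three of four members healthy and one disk clicking and dead, a RAID 5 array can usually still be assembled in degraded mode on a Linux PC so the data can be copied off. The standard sanity check before attempting assembly is that the surviving members' mdadm event counters agree. The sketch below runs against an embedded sample of `mdadm --examine` style output (device names and counts are illustrative, not from this unit); on a real rescue system the input would come from running `mdadm --examine` on each surviving data partition:

```shell
#!/bin/sh
# Simulated event-counter extracts for three surviving RAID members.
# Members whose counters match the maximum left the array last (or never)
# and are the safest set to assemble, ideally read-only.
sample='/dev/sdb1 Events: 104211
/dev/sdc1 Events: 104211
/dev/sdd1 Events: 103996'

max=$(printf '%s\n' "$sample" | awk '{print $3}' | sort -n | tail -1)
printf '%s\n' "$sample" | awk -v m="$max" '
    $3 == m { print $1, "up to date" }
    $3 != m { print $1, "stale (dropped out earlier)" }'
```

If the counters match, `mdadm --assemble --run` with the surviving members will normally bring the array up degraded; if one lags badly, forcing it in risks stale data, so imaging the disks first is the cautious route.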
ReadyNAS Pro 6 not rebuilding raid after entering new disk
Drive 6 failed on my ReadyNAS Pro 6 setup (on RAIDiator 4.2.30). I just inserted a new drive of the same make and type into the slot. Frontview and the log files show conflicting information: the log file states the system is rebuilding, while the status shows the system is non-redundant and drive 6 has failed. I looked at the system through SSH. The syslog tells me "NAS-HOME kernel: sdf: unknown partition table", and the expand_md.log file contains the following (a portion is included below):

[2017/01/02 12:58:53 2162] Boot, handle mutiple expand_md
[2017/01/02 12:58:54 2163] STAGE_CHECK: saved my_pid 2163 in check mode.
[2017/01/02 12:58:54 2163] RAID MODE: 1, sn=
[2017/01/02 12:58:54 2163] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 12:58:54 2163] Current file system status: offline
[2017/01/02 12:58:54 2163] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2017/01/02 12:58:54 2163] Running disk SMART quick self-test on new disk 6 [/dev/sdf]...
[2017/01/02 13:00:54 2163] PASSED
[2017/01/02 13:00:54 2163] LINE 1144: exec command: killall -HUP monitor_enclosure
[2017/01/02 13:00:55 2163] X_level: 5
[2017/01/02 13:00:55 2163] /usr/sbin/expand_md
[2017/01/02 13:00:55 2163] MD degraded 1, bg_job 0, boot 1
[2017/01/02 13:00:55 2163] Read 1957072 byte from configuration
[2017/01/02 13:00:55 2163] Disk configuration matching with online configuration.
[2017/01/02 13:00:55 2163] May need to fix array.
[2017/01/02 13:00:55 2163] new_disk_pt_count: 1
[2017/01/02 13:00:55 2163] * new disk: /dev/sdf
[2017/01/02 13:00:55 2163] ===== Partition Entry (MD not used) =====
[2017/01/02 13:00:55 2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 0 0 NO_MD 0
[2017/01/02 13:00:55 2163] partitions: 0 property: UNKNOWN
[2017/01/02 13:00:55 2163] ===== Partition Entry (Used by MD) =====
[2017/01/02 13:00:55 2163] 000 /dev/sda,WD-WMAY00561588,8, 0, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 3 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sda1, 8, 1, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sda2, 8, 2, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sda5, 8, 5, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 001 /dev/sdc,WD-WMAY00519101,8, 32, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 3 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sdc1, 8, 33, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sdc2, 8, 34, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sdc5, 8, 37, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 002 /dev/sde,WD-WMAY00446595,8, 64, 1953514584 1953512030 1953512030 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 3 property: 1SECTOR
[2017/01/02 13:00:55 2163] 000 /dev/sde1, 8, 65, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sde2, 8, 66, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sde5, 8, 69, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 003 /dev/sdb,WD-WCC4E0420327,8, 16, 3907018584 3907016481 3907016481 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 4 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sdb1, 8, 17, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sdb2, 8, 18, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sdb5, 8, 21, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 003 /dev/sdb6, 8, 22, 1953504451 MD_FULL md=/dev/md3
[2017/01/02 13:00:55 2163] 004 /dev/sdd,WD-WCC4E0426359,8, 48, 3907018584 3907016481 3907016481 MD_FULL 0
[2017/01/02 13:00:55 2163] partitions: 4 property: 4K
[2017/01/02 13:00:55 2163] 000 /dev/sdd1, 8, 49, 4194304 MD_FULL md=/dev/md0
[2017/01/02 13:00:55 2163] 001 /dev/sdd2, 8, 50, 524288 MD_FULL md=/dev/md1
[2017/01/02 13:00:55 2163] 002 /dev/sdd5, 8, 53, 1948793438 MD_FULL md=/dev/md2
[2017/01/02 13:00:55 2163] 003 /dev/sdd6, 8, 54, 1953504451 MD_FULL md=/dev/md3
[2017/01/02 13:00:55 2163] in find_drive_... looking at /dev/md3
[2017/01/02 13:00:55 2163] in find_drive_... looking at /dev/md2
[2017/01/02 13:00:55 2163] found drive has the needed size for /dev/md2: /dev/sdd, 3907018584, 3907016481
[2017/01/02 13:00:55 2163] Pre-scan found no usable drives: 4/-1.
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00561588
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00561588 WD-WMAY00561588
[2017/01/02 13:00:55 2163] Take /dev/sda WD-WMAY00561588 away
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00519101
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00519101 WD-WMAY00519101
[2017/01/02 13:00:55 2163] Take /dev/sdc WD-WMAY00519101 away
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00446595
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00446595 WD-WMAY00446595
[2017/01/02 13:00:55 2163] Take /dev/sde WD-WMAY00446595 away
[2017/01/02 13:00:55 2163] Drive sn WD-WMAY00493613
[2017/01/02 13:00:55 2163] Drive WD-WMAY00493613 missing.
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0420327
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0420327 WD-WCC4E0420327
[2017/01/02 13:00:55 2163] Take /dev/sdb WD-WCC4E0420327 away
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0426359
[2017/01/02 13:00:55 2163] Drive sn WD-WCC4E0426359 WD-WCC4E0426359
[2017/01/02 13:00:55 2163] Take /dev/sdd WD-WCC4E0426359 away
[2017/01/02 13:00:55 2163] Total missing: 1, index=3
[2017/01/02 13:00:55 2163] found missing drive: /dev/sdf WD-WMAY00493613 MD_NULL blocks=1953514584 1953512030 1953512030
[2017/01/02 13:00:55 2163] XRAID cfg_pt = , these two drives size close: 0 /dev/sdf 1953514584, 4 /dev/sdh 1953514584
[2017/01/02 13:00:55 2163] Changed drive selction: s=4/4, t=-1/0, file=/var/log/frontview/.known_cfgdir/
[2017/01/02 13:00:55 2163] Needed pt file /var/log/frontview/.known_cfgdir/ missing,
[2017/01/02 13:00:55 2163] Changed drive selction: s=4/4, t=-1/0, file=
[2017/01/02 13:00:55 2163] Repair md: use disk /dev/sdf, size 1953514584
[2017/01/02 13:00:58 2163] LINE 1505: exec command: sgdisk -Z /dev/sdf
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
[2017/01/02 13:01:01 2163] Update new added disk information: from /dev/sdi to /dev/sdf
[2017/01/02 13:01:01 2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 0 0 NO_MD 0
[2017/01/02 13:01:01 2163] partitions: 0 property: UNKNOWN
[2017/01/02 13:01:01 2163] 000 /dev/sdi,,8, 128, 1953514584 1953513472 1953513472 NO_MD 1
[2017/01/02 13:01:01 2163] partitions: 1 property: UNKNOWN
[2017/01/02 13:01:01 2163] 000 /dev/sdi1, 8, 129, 1953513472 NO_MD md=
[2017/01/02 13:01:01 2163] 000 /dev/sdf,WD-WMAY00519462,8, 80, 1953514584 1953513472 1953513472 NO_MD 1
[2017/01/02 13:01:01 2163] partitions: 1 property: UNKNOWN
[2017/01/02 13:01:01 2163] 000 /dev/sdf1, 0, 0, 1953513472 NO_MD md=
[2017/01/02 13:01:01 2163] gpt sig:0,mbr sig:0,fake type:0
[2017/01/02 13:01:01 2163] get disk /dev/sdd format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2017/01/02 13:01:01 2163] LINE 7921: exec command: sgdisk -p /dev/sdd | grep '[0-9] ' > /var/log/frontview/.V_E.source_pt.sav
[2017/01/02 13:01:02 2163] Get format of pt /var/log/frontview/.V_E.source_pt.sav
[2017/01/02 13:01:02 2163] pt is GPT format
[2017/01/02 13:01:02 2163] dump pt list to /dev/sdf root: 0
[2017/01/02 13:01:02 2163] index: 1 start=64 end=8388671 last_end=0
[2017/01/02 13:01:02 2163] index: 2 start=8388672 end=9437247 last_end=8388671
[2017/01/02 13:01:02 2163] index: 5 start=9437256 end=3907024131 last_end=9437247
[2017/01/02 13:01:02 2163] index: 6 start=3907024136 end=7814033038 last_end=3907024131
[2017/01/02 13:01:02 2163] LINE 8038: exec command: sgdisk -g -a 8 -n 1:64:8388671 -t 1:FD00 -n 2:8388672:9437247 -t 2:FD00 -n 5:9437256:3907024131 -t 5:FD00 -n 6:3907024136:7814033038 -t 6:FD00 /dev/sdf
Creating new GPT entries.
Could not create partition 5 from 3907024136 to 7814033038
Could not change partition 6's type code to FD00!
Error encountered; not saving changes.
[2017/01/02 13:01:03 2163] partition copy error /dev/sdf rc = 400
[2017/01/02 13:01:03 2163] Copy partition table FAIL rc=-3!!
[2017/01/02 13:01:03 2163] Failed to add disk to md
[2017/01/02 13:01:03 2163] Added drive to md, grown=0/0xff869778
[2017/01/02 13:01:03 2163] LINE 4852: exec command: /usr/sbin/expand_md -a super >> /var/log/frontview/expand_md.log 2>&1 &
[2017/01/02 13:01:03 2163] LINE 4855: exec command: /frontview/bin/volumescan &
[2017/01/02 13:01:03 2163] LINE 4939: exec command: ps -ef | grep expand_md | grep -v grep > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 13:01:04 2163] STAGE_WIPE: Clean my_pid 2163
[2017/01/02 13:02:05 4622] STAGE_CHECK: saved my_pid 4622 in check mode.
[2017/01/02 13:02:05 4622] RAID MODE: 1, sn=
[2017/01/02 13:02:05 4622] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2017/01/02 13:02:05 4622] Current file system status: ext4
[2017/01/02 13:02:05 4622] LINE 5305: exec command: rm -fr /var/log/frontview/.V_E.*
[2017/01/02 13:02:05 4622] Running disk SMART quick self-test on new disk 6 [/dev/sdf]...
[2017/01/02 13:04:06 4622] PASSED
[2017/01/02 13:04:06 4622] LINE 1144: exec command: killall -HUP monitor_enclosure
[2017/01/02 13:04:07 4622] X_level: 5
[2017/01/02 13:04:07 4622] /usr/sbin/expand_md -a super
[2017/01/02 13:04:07 4622] MD degraded 1, bg_job 0, boot 0
[2017/01/02 13:04:07 4622] STAGE_WIPE: Clean my_pid 4622

I tried manually zapping the partition and rebooting; that did not work. I could use some help here to get the system running properly again. Thanks in advance for all of your wisdom and support.
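The sgdisk failure in that log has a visible cause: the firmware picked the 4 TB member /dev/sdd as the partition-table template ("get disk /dev/sdd format ..."), then tried to recreate that layout on the new 2 TB disk, so the last partition's end sector lands far past the end of the device. A quick arithmetic sketch using the figures from the log (the new disk's size is reported in 1 KiB blocks, so it is doubled to get 512-byte sectors; treat the conversion as an assumption):

```shell
#!/bin/sh
# Figures taken from the expand_md.log excerpt above.
new_disk_blocks=1953514584                 # new 2 TB disk, 1 KiB blocks
new_disk_sectors=$((new_disk_blocks * 2))  # assumed 512-byte sectors
part6_end=7814033038                       # end sector sgdisk requested for partition 6

# The requested end comes from the 4 TB template disk, so it overshoots.
if [ "$part6_end" -gt "$new_disk_sectors" ]; then
    echo "template partition ends $((part6_end - new_disk_sectors)) sectors past the disk"
fi
```

A plausible workaround (a sketch, not an official NETGEAR procedure) would be to copy the table from a same-size 2 TB member instead, e.g. `sgdisk -R /dev/sdf /dev/sda` followed by `sgdisk -G /dev/sdf` to randomize the GUIDs, and then re-add the partitions to md0/md1/md2 with `mdadm --add`; with a degraded array, a verified backup before any manual repair is strongly advised.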
"health changed from Redundant to Degraded"
And what should I do about it? Is it as simple as shutting down the NAS, replacing the bad disk with a like one, and then powering back up? The System > Volumes web page shows both disks with no indication that only one disk is active (which is what I thought I would see). ReadyNAS 104, firmware 6.4.2, X-RAID, RAID-1: I have a RAID-1 setup with 2x 2TB disks (WD). Both disks are much less than a year old, way too soon for failure. This is home (i.e. fairly light) use. The problematic disk, per the bit of log below, shows the "Disk State" as ONLINE. Oddly, it shows the "Temperature" as -1. Here's the last bit of the logs:

Volume: Volume data health changed from Redundant to Degraded.
Fri Apr 22 2016 16:57:21 Disk: Detected high uncorrectable error count: [984] on disk 1 (Internal) [ST2000DM001-9YN164, W1E14G3B]. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Fri Apr 22 2016 16:57:20 Disk: Detected increasing pending sector: count [984] on disk 1 (Internal) [ST2000DM001-9YN164, W1E14G3B] 13 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 22:47:05 Disk: Detected increasing reallocated sector count: [20792] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 93 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 22:44:59 Disk: Detected increasing reallocated sector count: [20688] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 92 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 22:42:52 Disk: Detected increasing reallocated sector count: [20648] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 91 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 22:11:29 Disk: Detected increasing reallocated sector count: [20632] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 90 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 22:03:16 Disk: Detected increasing reallocated sector count: [20616] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 89 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 22:01:04 Disk: Detected increasing reallocated sector count: [20544] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 88 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 21:54:56 Disk: Detected increasing reallocated sector count: [20536] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 87 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Thu Apr 21 2016 21:17:07 Disk: Detected increasing reallocated sector count: [20512] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 86 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Tue Apr 19 2016 10:31:39 Disk: Detected increasing reallocated sector count: [20480] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 85 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Mon Apr 18 2016 21:08:44 Disk: Detected increasing reallocated sector count: [20472] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 84 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Mon Apr 18 2016 21:04:32 Disk: Detected increasing reallocated sector count: [20464] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 83 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
Mon Apr 18 2016 21:00:20 Disk: Detected increasing reallocated sector count: [20424] on disk 1 (Internal) [ST2000DM001-9YN164 W1E14G3B] 82 times in the past 30 days. This condition often indicates an impending failure. Be prepared to replace this disk to maintain data redundancy.
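The reallocated-sector entries above tell a consistent story: the count climbs from 20424 to 20792 over a few days, which is why the NAS flags the disk even though its state still reads ONLINE. A small sketch that pulls the bracketed counts out of log lines like these and checks that they are rising (the sample is a shortened copy of the log above, oldest entry first):

```shell
#!/bin/sh
# Shortened, oldest-first sample of the ReadyNAS event log shown above.
log='Detected increasing reallocated sector count: [20424] on disk 1
Detected increasing reallocated sector count: [20472] on disk 1
Detected increasing reallocated sector count: [20536] on disk 1
Detected increasing reallocated sector count: [20792] on disk 1'

# Extract each bracketed count and verify the sequence never decreases.
counts=$(printf '%s\n' "$log" | sed -n 's/.*\[\([0-9]*\)\].*/\1/p')
prev=0
rising=yes
for c in $counts; do
    [ "$c" -lt "$prev" ] && rising=no
    prev=$c
done
echo "reallocated sector count rising: $rising"
```

A steadily rising reallocated-sector count is the classic replace-the-disk signal. On a RAID-1 X-RAID volume the replacement drive normally resyncs automatically once inserted; whether to hot-swap or power down first is model-specific, so checking the hardware manual for the RN104 before pulling the drive is the safe move.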