Forum Discussion
wsopko (Aspirant)
Sep 13, 2017
ReadyNAS NV+ v2 - volume scan failed to run properly - after replacing failed drive
Hi,
I have a ReadyNAS NV+ v2, running RAIDiator 5.3.12 firmware. I replaced the drive in bay #4 after seeing these warnings in the logs (newest entry shown first):
Disk failure detected.
Detected increasing uncorrectable errors[3832] on disk 4 [ST3000DM001-1CH166, Z1F37YKQ]. This often indicates an impending failure. Please be prepared to replace this disk to maintain data redundancy.
I removed the bad drive from bay #4 (Seagate 3TB), and hot swapped with a new drive (Western Digital Red 3TB). The NAS started a resync automatically. When I checked the next day, I saw the following in the logs (newest entry shown first):
RAID sync finished on volume C. The array is still in degraded mode, however. This can be caused by a disk sync failure or failed disks in a multi-parity disk array.
RAID sync started on volume C.
Data volume will be rebuilt with disk 4.
New disk detected. If multiple disks have been added, they will be processed one at a time. Please do not remove any added disk(s) during this time. [Disk 4]
A disk was removed from the ReadyNAS. One or more RAID volumes are currently unprotected, and an additional disk failure or removal may result in data loss. Please add a replacement disk as soon as possible.
Disk removal detected. [Disk 4]
The front of the ReadyNAS at this point showed "Vol C: lifesupp!". I then rebooted the NAS, checking the option to do a volume scan at next boot. The NAS restarted, showing this in the logs (newest entry shown first):
Reallocated sector count has increased in the last day.
Disk 3:
Previous count: 0
Current count: 1
Growing SMART errors indicate a disk that may fail soon. If the errors continue to increase, you should be prepared to replace the disk.
System is up.
Volume scan failed to run properly.
Now all 4 drives show a status of OK, but the volume does not seem to be mounting (I can't see anything in the Volumes section on the Overview tab in RAIDiator, where it normally shows how big the volume is and the RAID type).
Drives 1, 2, and 4 all show a Raw Read Error Rate of 0, but drive 3 shows a Raw Read Error Rate of 6 in the SMART information section. I'm not sure if drive 3 had this raw read error rate before I replaced drive 4.
Do you think I should try to put the failed drive that I took out back into bay #4? The reason I'm thinking this is that before I replaced drive #4, the NAS was working fine (other than the errors in the logs), and I could read and write to the volume. Or is it better to replace drive #3 now, since it has raw read errors? Or are there some things I can do via the command line to repair any errors in the filesystem?
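For what it's worth, this is the SMART check I was thinking of running from the command line on disk 3. It's just a sketch, assuming smartctl is available on this firmware and that disk 3 maps to /dev/sdc (the bay-to-device mapping is a guess on my part):
root@readynas:~# smartctl -A /dev/sdc | grep -E 'Raw_Read_Error_Rate|Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'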
Note: In case it matters, I have the "enable disk write cache" setting enabled.
I can ssh into the NAS, but the volume information doesn't seem right (I have four 3TB drives in the NAS, so there should be about 8TB of usable space):
root@readynas:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 4.0G 567M 3.3G 15% /
tmpfs 16K 0 16K 0% /USB
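In case it helps, here is what I was planning to run next to see the raw RAID state. This is a minimal sketch, assuming the data array shows up in /proc/mdstat alongside md0 and md1 (I'd guess it is md2, but whatever name mdstat reports is the one to use):
root@readynas:~# cat /proc/mdstat          # lists all md arrays and their sync/degraded state
root@readynas:~# mdadm --detail /dev/md2   # assuming md2 is the data array; substitute the name mdstat shows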
Thank you for any help you can provide!
6 Replies
- wsopko (Aspirant)
Yes, this is all the output I get when running df -h
root@readynas:/var/log# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md0 4.0G 567M 3.3G 15% /
tmpfs 16K 0 16K 0% /USB
When I run fdisk, I can see the 4 3TB drives listed:
root@readynas:/var/log# fdisk -l
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
256 heads, 63 sectors/track, 363376 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 266306 2147483647+ ee GPT
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
256 heads, 63 sectors/track, 363376 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 266306 2147483647+ ee GPT
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 267350 2147483647+ ee GPT
WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdd1 1 267350 2147483647+ ee GPT
Disk /dev/md0: 4293 MB, 4293906432 bytes
2 heads, 4 sectors/track, 1048317 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1: 536 MB, 536858624 bytes
2 heads, 4 sectors/track, 131069 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
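Since fdisk can't read GPT, I could also try parted to see the real partition layout (assuming parted is installed on this firmware, as the warning above suggests):
root@readynas:/var/log# parted -l          # prints the GPT partition tables that fdisk can't show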
Here is the contents of /etc/fstab in case that helps:
root@readynas:/var/log# cat /etc/fstab
/dev/md0 / ext3 defaults,noatime 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
/dev/md1 swap swap defaults 0 0
/dev/c/c /c ext4 defaults,acl,nodelalloc,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime 0 2
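From that /dev/c/c entry, I gather the data volume is an LVM logical volume 'c' in a volume group 'c' on top of the RAID array. Here is a rough sketch of the checks I could run to see whether LVM finds it, assuming the standard LVM tools are present on this firmware. The first three are read-only; the last line would actually try to activate the volume group, so I'd hold off on that until someone confirms it's safe:
root@readynas:/var/log# pvscan           # look for the LVM physical volume on the md array
root@readynas:/var/log# vgscan           # look for volume group 'c'
root@readynas:/var/log# lvscan           # an 'inactive' /dev/c/c here would explain the missing mount
root@readynas:/var/log# vgchange -ay c   # activate volume group 'c' (only if the scans above find it)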
- StephenB (Guru - Experienced User)
fdisk won't show anything useful here. But it's clear from the df -h that the system hasn't mounted the data volume.
Do you have a backup of the data?
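(A quick way to confirm that from the shell, assuming the data volume is supposed to mount at /c as the fstab above shows; no output means it really isn't mounted:)
root@readynas:~# grep ' /c ' /proc/mounts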