Forum Discussion
ManiacMike
Feb 07, 2017 · Aspirant
ReadyNAS Ultra 4 "Volume scan found errors that it could not easily correct."
Hi all! First time poster, long time lurker. After years of happy use of the Ultra 4, I have my first (major) issue. None of the shares are mounting, presumably due to the missing volumes. Fur...
ManiacMike
Feb 11, 2017 · Aspirant
Guess I'll just post here for posterity, as there aren't any clear examples around of how to debug these issues or determine how severe they are. Right now I'm still stuck in the information-gathering phase, making sure any future action doesn't corrupt what's already there.
mke2fs has a sister command, dumpe2fs, which seems safer and also lists any backup superblocks. In my case the output isn't so clean; it spews out messages like:
...
  32768 free blocks, 2048 free inodes, 0 directories
Group 31383: (Blocks 1028358144-1028390911) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x43bc, unused inodes 0
  Block bitmap at 1028128775 (bg #31376 + 7), Inode bitmap at 1028128791 (bg #31376 + 23)
  Inode table at 1028129696-1028129823 (bg #31376 + 928)
  32768 free blocks, 2048 free inodes, 0 directories
Group 31384: (Blocks 1028390912-1028423679) [INODE_UNINIT, ITABLE_ZEROED]
  Checksum 0x9999, unused inodes 0
...
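For reference, the backup superblock locations can be pulled out of that wall of output with a grep rather than reading every block group. The /dev/c/c device name below is an assumption based on the volume the later e2fsck run targets; substitute whatever the data volume device actually is on your unit.
# dumpe2fs /dev/c/c | grep -i superblock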
I didn't let it finish, as I believe I read somewhere that ReadyNAS uses a non-default block size, and I'm trying to determine whether that is why fsck wants to relocate all the inode block bitmaps.
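If anyone else needs to check the same thing: the filesystem's actual block size is recorded in the superblock summary, so either of these should answer the non-default-block-size question without scanning every group (again assuming the data volume device is /dev/c/c):
# dumpe2fs -h /dev/c/c | grep -i 'block size'
# tune2fs -l /dev/c/c | grep -i 'block size'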
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid5 sda6[0] sdc6[2] sdb6[1]
3418630528 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md2 : active raid5 sda5[0] sdd5[3] sdc5[5] sdb5[4]
718431744 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid5 sda2[0] sdd2[3] sdc2[5] sdb2[4]
1572672 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 sdc1[5] sdb1[4] sda1[0] sdd1[3]
4194292 blocks super 1.2 [4/4] [UUUU]
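A bit more detail per array than mdstat shows (array state, event counts, which slot each disk occupies) can be had from mdadm; for the data arrays listed above that would be something like:
# mdadm --detail /dev/md2
# mdadm --detail /dev/md3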
All disks check out fine after the smartctl tests; sdd showed some signs of wear and should be replaced, but nothing is at critical levels.
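For anyone repeating this, a typical way to run those checks is the long SMART self-test on each disk, followed by a look at the self-test log and attributes once it completes; /dev/sdd is just the example device here:
# smartctl -t long /dev/sdd
# smartctl -l selftest /dev/sdd
# smartctl -A /dev/sdd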
ManiacMike
Feb 24, 2017 · Aspirant
As an update, I got the four drives and ended up taking the drives out of the array one by one and cloning them with my desktop. A bit of research pointed me to ddrescue rather than dd. Carefully checking each drive as I swapped it out, I was able to run this:
sudo ddrescue -f /dev/sdc /dev/sdb drive1.log
sudo ddrescue -f /dev/sdc /dev/sdb drive2.log
sudo ddrescue -f /dev/sdc /dev/sdb drive3.log
sudo ddrescue -f /dev/sdc /dev/sdb drive4.log
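Side note: the four commands above all show the same /dev/sdc to /dev/sdb pair, which looks like a copy-paste artifact; the source and destination obviously change with each physical swap. A single clone pass would typically look roughly like the sketch below, with sdX/sdY standing in for whichever source and destination disks are attached at the time (hypothetical names); the map/log file is what lets ddrescue resume and then retry only the bad areas on a second pass:
sudo ddrescue -f -n /dev/sdX /dev/sdY drive1.log
sudo ddrescue -f -r3 /dev/sdX /dev/sdY drive1.log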
Each 2TB drive took 8 hours to copy. Now I've kicked off:
e2fsck -y -f -v /dev/c/c
In hindsight, I could and should have used the -C flag to see what's going on. It has been running for three days now with the NAS CPU pegged at 99.7%. I tried killall -USR1 e2fsck to no avail. I've checked /sys/block/sd*/stat, and reads and writes seem to be happening, so I guess I'll just wait a few weeks and see if it finishes.
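For the next person who hits this: -C takes a file descriptor, so e2fsck -C 0 writes a completion bar to e2fsck's own stdout. That also likely explains why killall -USR1 appeared to do nothing here: SIGUSR1 tells the running e2fsck to start printing its progress bar, but it prints to the terminal where e2fsck was started, not to the shell that sent the signal (SIGUSR2 turns it back off). A restart with progress reporting would look roughly like this, assuming the same /dev/c/c volume:
# e2fsck -f -y -C 0 /dev/c/c
# kill -USR1 $(pidof e2fsck)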