Forum Discussion
Shadowlore
May 26, 2019 · Guide
ReadyNas 526X "Data: dead"?
ReadyNAS 526x
Firmware: 6.9.4 Hotfix 1 (will update to the latest version once all this is sorted, or once I'm happily convinced I'm not going to have to do a factory default).
6 drives, all WD Reds, in X-...
- May 28, 2019
Wow... So I finally got ahold of tech support, and Rene with tech support got the issue sorted out.
They actually discovered another issue with the unit.. here's the full info:
"There was some disputes on the superblock of /dev/sda so we adjusted it to sync with the other working drives. However /dev/sdb has not synced fully so I suggested to remove it from the array. Try to format /dev/sdb and hotplug it to the unit to sync again properly. Once you sync the new formatted drive, the array will be complete again and should not be degraded anymore."
Holy. Cow. My weekend is saved.
Now, then... time to start this backup before I issue a resync. :)
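(For anyone who finds this thread later: the exact commands Rene ran aren't posted, but the fail/remove/wipe/re-add cycle support describes looks roughly like this from an ssh shell. This is only a sketch for one RAID group — in the mdstat output further down the same physical drive belongs to several md groups (md0, md1, md126, md127) — and the md/partition names below are taken from those logs, so check your own before touching anything.)

# mark the out-of-sync member faulty and drop it from the degraded group
mdadm --manage /dev/md126 --fail /dev/sdb4
mdadm --manage /dev/md126 --remove /dev/sdb4
# wipe the stale RAID metadata ("format"), then re-add so it resyncs from scratch
mdadm --zero-superblock /dev/sdb4
mdadm --manage /dev/md126 --add /dev/sdb4
# watch the resync progress
cat /proc/mdstat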
Marc_V
May 26, 2019 · NETGEAR Employee Retired
Hi Shadowlore
It seems there are issues with the RAID configuration or with the disks that failed, but I would recommend contacting NETGEAR Support so they can properly assist you with fixing the RAID. Please note, though, that in the event data recovery is needed, there would be a separate charge. Also, if you are out of warranty, you may have to purchase a support contract, such as a pay-per-incident contract ($75).
You can contact them through my.netgear.com by creating an online case.
HTH
Regards
Shadowlore
May 26, 2019 · Guide
Yeah, already looking into that route, as well as a few others.
If the drives had come back as bad via WD, then I'd at least understand it... but the part I don't really get is that the data was mostly intact (only thumbnail files were reporting as bad), so I was able to copy MOST of the data off.
I powered up another ReadyNAS unit to start a 10Gb backup, but wanted to move the device to a different UPS just in case (got some storms rolling in), and after the reboot it seems to have forgotten its volumes, even when I've tried to mount them read-only.
- StephenB · May 26, 2019 · Guru - Experienced User
This definitely is weird. Have you grabbed the log zip file?
Did you always have 6x4TB in the unit? I'm thinking that you might have started with some smaller drives (which would give you multiple RAID groups in the array). Then something happened that put one of the RAID groups out of sync.
FWIW, one thing I recently discovered with my WD Reds - sometimes unrecoverable reads don't end up incrementing the pending sector counts. I found several UNCs when I ran smartctl -x on one of the drives with ssh. When I tested that drive with Lifeguard, it failed. Someone else here tried that test (at my suggestion) and also uncovered a failed drive that way.
Also, in general I've found that the destructive write-zeros test in Lifeguard sometimes finds issues that the non-destructive test misses (and vice versa). Though you shouldn't run that test until you sort out your data loss.
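If you want to run the same check yourself, something along these lines over ssh (as root) will show it; /dev/sda is just an example device name, so repeat it for each disk:

# extended SMART dump: attribute table plus the device error log
smartctl -x /dev/sda
# the things to look at are attributes 197/198 (pending / offline-uncorrectable
# sectors) and any read errors listed in the error-log section -- UNCs can show
# up there even while the pending-sector count still reads 0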
- Shadowlore · May 26, 2019 · Guide
Yeah.. got the log prior to everything going sideways, and after. (if ya wanna look at them, let me know.. happy to share)
You are 100% correct. I originally had 4 drives in the unit (ported over from an Ultra4) and then over the years I've expanded and replaced drives as they die.
Right now, I'm seeing 3 'volumes':
Volume 1 (data) shows as being 0TB.
Volume 2 (data-0) shows as being 8TB
Volume 3 (data-1) shows as being 8TB
The original volume (just 'data') was 19TB (actually running 6 drives, 4TB each, obviously)
Right now, my wife and daughter are about to kill me, since the backup hadn't run in a while (partially my fault, partially theirs)... and my daughter's entire senior year of school was on that volume.
I was able to recover a large number of files prior to the reboot, but the photos were so large that the plan was just to reboot into read-only mode and let all of the photos sync to our cloud storage... but needless to say, the reboot caused this weird split.
Ran an in-depth scan on the drives last night and found that the 'failed' second drive is now showing a pending SMART error (but it wasn't showing it last night.. which is odd...), so I'm on my way to Microcenter to buy yet another drive.
If anyone has any recommendations, I'm more than up for suggestions at this point. I know at least a chunk of the data is likely gone... but at this point I'm just trying to recover what I can. *facedesk*
mdstat.log prior to reboot:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
1569792 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md126 : active raid5 sdf4[6](S) sdd4[0] sda4[5](F) sdb4[3] sdc4[2] sde4[1]
9766874560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/4] [UUUU__]
md127 : active raid5 sdf3[10] sda3[11] sde3[7] sdd3[8] sdc3[9] sdb3[6]
9743313920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
bitmap: 0/15 pages [0KB], 65536KB chunk
md0 : active raid1 sdf1[10] sda1[11] sde1[7] sdd1[8] sdc1[9] sdb1[6]
4190208 blocks super 1.2 [7/6] [UUUUUU_]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Tue Oct 8 22:29:30 2013
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 7
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Sat May 25 20:49:03 2019
State : clean, degraded
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Name : ***REMOVED:0 (local to host ***REMOVED)
UUID : ***REMOVED
Events : 4368189
Number Major Minor RaidDevice State
11 8 1 0 active sync /dev/sda1
10 8 81 1 active sync /dev/sdf1
6 8 17 2 active sync /dev/sdb1
9 8 33 3 active sync /dev/sdc1
8 8 49 4 active sync /dev/sdd1
7 8 65 5 active sync /dev/sde1
- 0 0 6 removed
/dev/md/data-0:
Version : 1.2
Creation Time : Tue Oct 8 22:29:31 2013
Raid Level : raid5
Array Size : 9743313920 (9291.95 GiB 9977.15 GB)
Used Dev Size : 1948662784 (1858.39 GiB 1995.43 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat May 25 16:40:56 2019
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : ***REMOVED:data-0 (local to host ***REMOVED)
UUID : ***REMOVED
Events : 49070
Number Major Minor RaidDevice State
11 8 3 0 active sync /dev/sda3
10 8 83 1 active sync /dev/sdf3
6 8 19 2 active sync /dev/sdb3
9 8 35 3 active sync /dev/sdc3
8 8 51 4 active sync /dev/sdd3
7 8 67 5 active sync /dev/sde3
/dev/md/data-1:
Version : 1.2
Creation Time : Mon Jul 27 20:52:39 2015
Raid Level : raid5
Array Size : 9766874560 (9314.42 GiB 10001.28 GB)
Used Dev Size : 1953374912 (1862.88 GiB 2000.26 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Sat May 25 20:48:30 2019
State : clean, FAILED
Active Devices : 4
Working Devices : 5
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Name : ***REMOVED:data-1 (local to host ***REMOVED)
UUID : *** REMOVED
Events : 39160
Number Major Minor RaidDevice State
0 8 52 0 active sync /dev/sdd4
1 8 68 1 active sync /dev/sde4
2 8 36 2 active sync /dev/sdc4
3 8 20 3 active sync /dev/sdb4
- 0 0 4 removed
- 0 0 5 removed
5 8 4 - faulty /dev/sda4
6 8 84 - spare /dev/sdf4
MDSTAT.LOG POST REBOOT:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
1308160 blocks super 1.2 512K chunks 2 near-copies [5/5] [UUUUU]
md0 : active raid1 sda1[10] sdb1[6] sde1[7] sdd1[8] sdc1[9]
4190208 blocks super 1.2 [7/5] [U_UUUU_]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Tue Oct 8 22:29:30 2013
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 7
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Sun May 26 02:32:42 2019
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Name : ***REMOVED:0 (local to host ***REMOVED)
UUID : ***REMOVED
Events : 4370599
Number Major Minor RaidDevice State
10 8 1 0 active sync /dev/sda1
- 0 0 1 removed
6 8 17 2 active sync /dev/sdb1
9 8 33 3 active sync /dev/sdc1
8 8 49 4 active sync /dev/sdd1
7 8 65 5 active sync /dev/sde1
- 0 0 6 removed
- StephenB · May 26, 2019 · Guru - Experienced User
I suggest cloning the disks with SMART errors (using a utility that does sector by sector cloning). Probably Netgear support is your best pathway to get the volume to mount (and if needed do data recovery).
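GNU ddrescue is one tool that does sector-by-sector cloning (just an example; any utility that copies at the sector level and skips bad blocks will do). A rough sketch, with placeholder device names (/dev/sdX = failing source, /dev/sdY = blank target of the same size or larger, both attached to a separate Linux box — triple-check the names before running anything):

# first pass: copy everything readable, skip over bad areas, keep a map file
ddrescue -f -n /dev/sdX /dev/sdY rescue.map
# second pass: go back and retry the bad areas up to three times
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map

Do the clone with the drive pulled from the NAS, so nothing writes to it while it's being copied.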
Shadowlore wrote:
Yeah.. got the log prior to everything going sideways, and after. (if ya wanna look at them, let me know.. happy to share)
Probably JohnCM_S or Hopchen (former Netgear) are the right folks to take a look. Send the link in a PM.