Forum Discussion
Anonymous
Sep 15, 2020
RN31400: data DEAD, all drives showing healthy on reboot
The NAS was showing 'data Dead' on the LCD when I inspected it after noticing it wasn't working. I turned the box off and restarted it. After the reboot, the admin suite shows the notification 'Remove inactive volumes to u...
- Sep 19, 2020
I don't think the NAS retains any state on this, other than what btrfs itself retains.
What you need to do is forcibly mount the array, and perhaps also attempt to repair the file system. This isn't something I've ever needed to do, so I can't really offer much advice.
If you boot up in tech support mode, you could try
# rnutil chroot
# btrfs device scan
# btrfs fi show
# mount -t btrfs -o ro,recovery /dev/md127 /data
If you just go into ssh with the NAS running, then you'd skip the rnutil command.
One user here successfully used that mount command to mount a missing volume. If it works then it will mount the volume read-only so you'd need to offload the data.
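If the recovery mount does succeed, offloading to an external USB disk could look roughly like this. This is only a sketch: the USB device name (/dev/sdx1), mount point, and share path are examples, so check your own device names with lsblk first.

```shell
# Identify the external disk before touching anything
lsblk

# Mount the external disk (example device /dev/sdx1, example mount point /mnt/usb)
mkdir -p /mnt/usb
mount /dev/sdx1 /mnt/usb

# Copy the data off the read-only volume; -a preserves permissions and
# timestamps, --progress shows per-file status on a long transfer
rsync -a --progress /data/ /mnt/usb/nas-backup/

# Flush writes and detach cleanly when done
sync
umount /mnt/usb
```

Since the volume is mounted read-only, nothing rsync does can make the situation worse; the risk is only in picking the wrong target device, hence the lsblk check first.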
Sandshark
Sep 15, 2020 - Sensei
Since you have access to the data, I think you are far better off doing a backup, factory default, and restore. I would not trust that you have taken care of every possible detail if you manually try to re-build the configuration files. And if you haven't, future expansion could be problematic.
FYI "data-0" is the name of the first MDADM RAID that makes up BTRFS volume "data". If you have two layers of RAID (from vertical expansion in the past), you may have "lost" the second one (data-1).
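If you can get in over ssh, a quick way to see which of those MDADM layers actually survived is to list the assembled arrays. The md device number for data-0 varies by unit (md127 is a common example, not a guarantee), so check mdstat before querying details:

```shell
# List every md array the kernel currently knows about
cat /proc/mdstat

# Detailed state of one array (substitute the md device mdstat reports)
mdadm --detail /dev/md127
```

If only md0 and md1 (the OS and swap arrays) show up, the data array itself failed to assemble, which matches the "data Dead" message.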
Anonymous
Sep 16, 2020
Sandshark Thank you for your reply. Your help is utterly invaluable! I'm not sure I understand, though. I've only really looked at the web admin suite, which shows the following: https://i.imgur.com/MNs02tZ.jpg
The four identical drives have been in from the start as RAID 5, and at this point I have no interest in expanding the device in the future. This box was chosen precisely to avoid this kind of time sink; the plan was that if a drive failed, I'd slap in a new one and forget about it. :( As you can see above, when it was working I only had the 'data' volume as RAID 5, with the usage tallied.
When it comes to the backing up and resetting you mention, how do I have access to the data? Is there a process/guide you could recommend? (Sorry, I'm having difficulty googling this issue!) I'm guessing the backup will probably require roughly 4TB, so I'll need to have that space available elsewhere... Could it be time to get a new NAS and put this one on eBay?
I'd love to know why the software decided that any hard drives had disconnected, considering the device is locked in a ventilated cupboard and only I have the key. Hey ho. :/
- StephenB - Sep 16, 2020 - Guru - Experienced User
Retired_Member wrote:
When it comes to backing up and reseting as you mention, how do I have access to data?
I think Sandshark was for some reason assuming that you did still have access to the data.
Retired_Member wrote:
I've only really looked at the web admin suite showing the following: https://i.imgur.com/MNs02tZ.jpg
Can you download the full log zip file (click on the logs page and you'll see a download control)?
The full disk smart stats are in disk_info.log, so you might look at those.
It would also be useful if you copy/paste mdstat.log into a reply here. It's best if you use the </> tool in the toolbar.
- Anonymous - Sep 16, 2020
StephenB No worries about Sandshark, I totally understand misunderstandings when helping at arm's length! :) Probably my fault!
Here are the logs:
Disk_info.log
Device: sdc  Controller: 0  Channel: 0
Model: WDC WD30EFRX-68EUZN0  Serial: WD-WCC4N4VF5VFL  Firmware: 82.00A82W
Class: SATA  RPM: 5400  Sectors: 5860533168
Pool: data-0  PoolType: RAID 5  PoolState: 5  PoolHostId: 2fe5cdf4
Health data: ATA Error Count: 0, Reallocated Sectors: 0, Reallocation Events: 0, Spin Retry Count: 0, Current Pending Sector Count: 0, Uncorrectable Sector Count: 0
Temperature: 41  Start/Stop Count: 3541  Power-On Hours: 31420  Power Cycle Count: 35  Load Cycle Count: 3525

Device: sda  Controller: 0  Channel: 1
Model: WDC WD30EFRX-68EUZN0  Serial: WD-WCC4N4SFKV58  Firmware: 82.00A82W
Class: SATA  RPM: 5400  Sectors: 5860533168
Pool: data-0  PoolType: RAID 5  PoolState: 5  PoolHostId: 2fe5cdf4
Health data: ATA Error Count: 0, Reallocated Sectors: 0, Reallocation Events: 0, Spin Retry Count: 0, Current Pending Sector Count: 0, Uncorrectable Sector Count: 0
Temperature: 43  Start/Stop Count: 3720  Power-On Hours: 31413  Power Cycle Count: 37  Load Cycle Count: 3705

Device: sdd  Controller: 0  Channel: 2
Model: WDC WD30EFRX-68EUZN0  Serial: WD-WCC4N6EXP6UL  Firmware: 82.00A82W
Class: SATA  RPM: 5400  Sectors: 5860533168
Pool: data-0  PoolType: RAID 5  PoolState: 5  PoolHostId: 2fe5cdf4
Health data: ATA Error Count: 0, Reallocated Sectors: 0, Reallocation Events: 0, Spin Retry Count: 0, Current Pending Sector Count: 0, Uncorrectable Sector Count: 0
Temperature: 43  Start/Stop Count: 3790  Power-On Hours: 30676  Power Cycle Count: 34  Load Cycle Count: 3770

Device: sdb  Controller: 0  Channel: 3
Model: WDC WD30EFRX-68EUZN0  Serial: WD-WCC4N6EXPLT6  Firmware: 82.00A82W
Class: SATA  RPM: 5400  Sectors: 5860533168
Pool: data-0  PoolType: RAID 5  PoolState: 5  PoolHostId: 2fe5cdf4
Health data: ATA Error Count: 0, Reallocated Sectors: 0, Reallocation Events: 0, Spin Retry Count: 0, Current Pending Sector Count: 0, Uncorrectable Sector Count: 0
Temperature: 41  Start/Stop Count: 3734  Power-On Hours: 30640  Power Cycle Count: 37  Load Cycle Count: 3716
mdstat.log
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sdd1[6] sdc1[4] sda1[1] sdb1[5]
      4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>

/dev/md/0:
           Version : 1.2
     Creation Time : Sat Dec 24 09:59:59 2016
        Raid Level : raid1
        Array Size : 4190208 (4.00 GiB 4.29 GB)
     Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Wed Sep 16 11:27:42 2020
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : unknown
              Name : 2fe5cdf4:0  (local to host 2fe5cdf4)
              UUID : 0306a872:cd2ef930:e5f2473d:32708412
            Events : 649

    Number   Major   Minor   RaidDevice State
       4       8       33        0      active sync   /dev/sdc1
       1       8        1        1      active sync   /dev/sda1
       6       8       49        2      active sync   /dev/sdd1
       5       8       17        3      active sync   /dev/sdb1

As I see it, the box had a special moment and erroneously decided that two discs had disconnected at the same time, hence failing the RAID array. The data is presumably all still there.
Thank you for your time and assistance!
- StephenBSep 16, 2020Guru - Experienced User
So the disks do have healthy smart stats, but the array for the volume itself doesn't appear in mdstat.log. You are only showing the OS partition and the swap partition.
What you might want to do next is send a private message (PM) to one of the mods - either JohnCM_S or Marc_V - asking them to analyze the logs. Upload your log zip file to cloud storage (google drive, dropbox, etc), and include a download link in the PM. Also send a link to this thread.
You send a PM using the envelope link in the upper right of the forum page. Note you shouldn't post a link to the log zip publicly.
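While waiting on the mods, a read-only look at the RAID superblocks on the data partitions would show whether the array metadata is still intact. On these units the data partition is usually the third one on each disk, but that's an assumption; confirm the layout with lsblk before running anything:

```shell
# Confirm the partition layout first; the data partitions are assumed
# to be sda3..sdd3 below, which may not match your unit
lsblk

# Read the md superblock from each data partition (read-only, safe)
mdadm --examine /dev/sd[abcd]3

# Only after reading that output carefully: attempt to reassemble the
# missing data array from its superblocks
mdadm --assemble --scan
```

The --examine output shows each member's event count and array state; if the counts disagree across disks, that would explain why the array refused to assemble, and it's worth sharing that output with the mods before forcing anything.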