Forum Discussion
tasone
Sep 23, 2019 · Aspirant
ReadyNAS 104 Volume inactive or dead
I am hoping someone can help me. I was not able to access the network share I have set up on my ReadyNAS, so I logged into my ReadyNAS 104. Everything looked OK, so I did a quick reboot...
tasone
Sep 24, 2019 · Aspirant
Sandshark wrote: On the Volumes tab, how many volumes are shown (typically, you'll see "data", "data-0", etc.)?
On the Performance tab, hovering the mouse over the dot for each drive, which ones say they are a part of which volume?
Hi
On the Volumes tab there is only one volume showing, the cleverly named "Volumeone" (all showing red).
On the Performance tab
Disk 1 - is part of volumeone
Disk 2 - is part of volumeone
Disk 3 - is part of volumeone
Disk 4 - is part of volumeone
On this tab all disks are showing as green with no errors.
Thanks
Andrew
StephenB
Sep 24, 2019 · Guru - Experienced User
Can you download the full log zip file, and post the contents of mdstat.log here? Just copy/paste it into a reply (the forum won't accept .log files). Don't post a downloadable link to the full zip, as there is some stuff in there that shouldn't be publicly posted.
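If you have SSH enabled on the NAS, you can also pull the same information directly at the command line. Nothing here writes to the disks; md127 is the usual name for the data array on these units, so adjust if yours differs:
# show all md arrays and their member status
cat /proc/mdstat
# detailed state of the data array
mdadm --detail /dev/md127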
tasone
Sep 24, 2019 · Aspirant
Here is the log file mdstat.log:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sdd3[0] sdb3[2] sda3[3] sdc3[4]
11706506496 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid10 sda2[4] sdc2[3] sdd2[2] sdb2[1]
1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sdd1[0] sdc1[4] sdb1[2] sda1[5]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Thu Sep 10 12:41:44 2015
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Sep 23 18:40:06 2019
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe64c78:0 (local to host 2fe64c78)
UUID : 7c69389f:26cc0e69:b33e47e8:1b7c9c3f
Events : 1791
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
5 8 1 1 active sync /dev/sda1
2 8 17 2 active sync /dev/sdb1
4 8 33 3 active sync /dev/sdc1
/dev/md/1:
Version : 1.2
Creation Time : Mon Mar 11 13:54:24 2019
Raid Level : raid10
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 523264 (511.00 MiB 535.82 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Sep 23 17:41:48 2019
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : unknown
Name : 2fe64c78:1 (local to host 2fe64c78)
UUID : c612366e:755998df:e3e83b5b:953c00f9
Events : 42
Number Major Minor RaidDevice State
4 8 2 0 active sync set-A /dev/sda2
1 8 18 1 active sync set-B /dev/sdb2
2 8 50 2 active sync set-A /dev/sdd2
3 8 34 3 active sync set-B /dev/sdc2
/dev/md/Volumeone-0:
Version : 1.2
Creation Time : Thu Sep 10 14:09:36 2015
Raid Level : raid5
Array Size : 11706506496 (11164.19 GiB 11987.46 GB)
Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Sep 23 18:25:08 2019
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 2fe64c78:Volumeone-0 (local to host 2fe64c78)
UUID : 527ab021:cd91839c:042685ab:432b4a60
Events : 14117
Number Major Minor RaidDevice State
0 8 51 0 active sync /dev/sdd3
4 8 35 1 active sync /dev/sdc3
3 8 3 2 active sync /dev/sda3
2 8 19 3 active sync /dev/sdb3
StephenB
Sep 24, 2019 · Guru - Experienced User
tasone wrote:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sdd3[0] sdb3[2] sda3[3] sdc3[4]
...
This is curious, as it is showing that all four drives are still part of the RAID array (the [4/4] [UUUU] means all four members are up and in sync). Maybe also look in system.log and kernel.log for BTRFS errors?
One option is to send a private message to one of the mods ( JohnCM_S or Marc_V ) with a downloadable link to the full log zip file, and ask them to review it for you. Don't post the link publicly. You send a PM using the envelope icon in the upper right of the forum page.
Unfortunately I think you will probably need the help of paid support to mount your volume.
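If you do find BTRFS errors there and want to try something non-destructive yourself first (assuming SSH is enabled): BTRFS keeps backup copies of its tree roots, and a read-only mount that falls back to them will sometimes bring a damaged volume up long enough to copy data off. A rough sketch; the mount point is illustrative, and the option name depends on the kernel (older kernels like the ReadyNAS's use "recovery", newer ones use "usebackuproot"):
# create a temporary mount point (path is illustrative)
mkdir -p /mnt/recovery
# read-only mount using a backup tree root; harmless if it fails
mount -t btrfs -o ro,recovery /dev/md127 /mnt/recovery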
tasone
Sep 24, 2019 · Aspirant
kernel.log is full of BTRFS errors:
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188839624704 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188839624704 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188839624704 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188839624704 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS warning (device md127): failed to read tree root
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188835102720 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188835102720 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188835102720 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188835102720 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188835102720 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS critical (device md127): unable to find logical 16188835102720 len 4096
Sep 23 18:25:09 TD-Archive kernel: BTRFS warning (device md127): failed to read tree root
Sep 23 18:25:09 TD-Archive kernel: BTRFS error (device md127): open_ctree failed
Sep 23 18:25:17 TD-Archive kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Sep 23 18:25:21 TD-Archive kernel: mvneta d0070000.ethernet eth0: Link is Up - 1Gbps/Full - flow control off
Sep 23 18:25:21 TD-Archive kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
and the system log is full of failed path errors:
Sep 23 18:39:28 TD-Archive snapperd[3153]: THROW: open failed path:/Volumeone/MasterBackups errno:2 (No such file or directory)
Sep 23 18:39:28 TD-Archive snapperd[3153]: reading failed
Sep 23 18:39:28 TD-Archive snapperd[3153]: THROW: open failed path:/Volumeone/Google-Vault errno:2 (No such file or directory)
Sep 23 18:39:28 TD-Archive snapperd[3153]: reading failed
Sep 23 18:39:28 TD-Archive snapperd[3153]: THROW: open failed path:/Volumeone/OneDrive-Vault errno:2 (No such file or directory)
Sep 23 18:39:28 TD-Archive snapperd[3153]: reading failed
Sep 23 18:40:03 TD-Archive apache_access[2593]: 192.168.45.201 "POST /dbbroker HTTP/1.1" 401
Sep 23 18:40:10 TD-Archive rnutil[5285]: Failed to get tier from /dev/md0 : 25
(just small parts of the logs)
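A note on the errors above: the "open_ctree failed" line means the BTRFS filesystem's metadata on md127 cannot be read, so the volume will not mount normally even though the RAID layer underneath is healthy. In that situation, btrfs restore can sometimes copy files off the array without mounting it. A rough sketch, assuming SSH access and an external destination with enough free space; both the destination path and the outcome are illustrative, not guaranteed:
# read-only scan for usable tree roots; writes nothing to the array
btrfs-find-root /dev/md127
# dry run first: -D only lists what would be recovered
btrfs restore -D /dev/md127 /mnt/usb
# then the actual copy, verbosely
btrfs restore -v /dev/md127 /mnt/usb
If none of that works, paid data recovery support (as suggested above) remains the safer route.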