
Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

tom72
Initiate

ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Hi,

 

I have 2 SATA 2TB drives. 

 

When I login to the NAS I get: Remove inactive volumes to use the disk. Disk #1,2.

 

I've double-checked to ensure there are no USB drives etc. plugged into the machine, and I've upgraded the firmware to the latest version (6.7.4).

 

The drives have re-synced; however, there are 2 red HDDs showing when I look in "Volumes".

 

I have the option to format the drives, but I don't want to, as I have data on there that I can't lose.

 

When I look at the lights, there are 2 solid blue LED lights on each hard drive number.

 

Thanks

 

 

Message 1 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

I set up my RN316 a few days ago under the (then current) firmware 6.7.3.

X-RAID2 setup (one of the reasons I wanted this NAS was to use different sized drives).

The drives were all put in at once:

2x 2TB (Ch 5 & 6), 2x 3TB (Ch 3 & 4), 2x 4TB (Ch 1 & 2)

Although it took 26 hours to get all the disks sorted out [edit: reshape], the NAS seemed to be working fine; I only had a small amount of test data on it.

 

I updated the firmware last night to 6.7.4, thinking the right time to do so was

a) when there was little data on the drives, and

b) when there wasn't any data to lose, and

c) to see how seamless NetGear's firmware updates were in a low risk situation, and

d) to get the RN316 onto the latest firmware.

 

Now I show a similar error to tom72.

- [edit: I can access the NAS local admin page at 10.0.1.xx in a browser OK.]

- all six HDDs show as red in System/Volumes

- I now have a warning saying "Remove inactive volumes to use the disk. Disk #1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 1, 2"

- instead of a single summary box on the left-hand side of the System/Volumes page, one that previously showed my working setup, I now have two boxes. The top one has Data = 0, Free Space = 0, Type = RAID 5. The bottom one has Data = 0, Free Space = 0, Type = RAID Unknown.

- whenever I try to use any other tab, a warning pops up saying "No volume exists. NETGEAR Recommends that you create a volume before configuring others. Navigate to the System > Volumes page to create a volume."

 

I'm pretty glad I had the opportunity to do a firmware update when nothing important was on the NAS, if this is an indication of the likely outcome of a simple firmware update! My confidence in the product is rather shaken right now, but perhaps this is simple, not too unusual, and it's simply my low level of familiarity with the RN316.

 

So, I have no idea what to do.

 

I would like to:

1. Understand what happened.

2. Understand what to do.

3. Get the NAS working again ASAP so I can start to use it.

 

Any help would be very much appreciated.

 

[edit: this link http://prosupport.netgear.com/pdf/DS_ProSupport_US_Intl.pdf suggests that, as I bought the unit in Australia just inside the window of dates in note 2, I am able to access 24/7 warranty tech support chat for the life of my product]

Model: RN31661D|ReadyNAS 316 6-Bay
Message 2 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

[Screenshot attached: Screen Shot 2017-05-30 at 11.24.30.png]

Model: RN31600|ReadyNAS 300 Series 6- Bay
Message 3 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

[edited for clarity]

 

Screenshot of local admin page System > Overview.

 

[Screenshot attached: Screen Shot 2017-05-30 at 11.26.12.png]

 

 

Screenshot of RAIDar.

 

[Screenshot attached: Screen Shot 2017-05-30 at 11.47.25.png]

Message 4 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

I've just looked at the logs (they're pretty short), and I wonder whether having the Disk Spin-Down feature (Admin > System > Power > Disk Spin-Down) enabled may have caused a problem during the firmware update.

 

Power Timer was not enabled.

 

I have disabled Disk Spin-Down for the moment, just in case, but would like to use it to save power in everyday use. The NAS is likely to be accessed sporadically (a few times a day, perhaps) and unlikely to be accessed at all during the night.

 

Happy to PM the log file to anyone in a position to provide advice.

 

Message 5 of 20
tom72
Initiate

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

I have the same issue. I don't think Disk Spin-Down would be the cause, as everything was working fine before I upgraded the firmware.

Message 6 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Thanks Tom.

 

I figured that Disk Spin-Down wouldn't be a factor, and agree that it's probably not, but as it was one of the few options I had ticked, I thought it best to include that info.

 

Is your problem the same, in that the NAS was working fine under 6.7.3, but after the firmware upgrade to 6.7.4 you now have a bunch of red disks and the "Remove inactive volumes..." error message?

 

I'll always ensure I have backups (onsite and cloud) but this isn't a good look for the NAS right now. Sigh.

Message 7 of 20
jak0lantash
Mentor

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

@RandomNASChap I think you did already; @tom72, maybe you want to upvote this "idea":

https://community.netgear.com/t5/Idea-Exchange-for-ReadyNAS/Change-the-incredibly-confusing-error-me...

 

Download the logs from the GUI (System / Logs / Download Logs), look inside dmesg.log, and search for md127. You can post an extract here if you want me to analyse it.
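
If it helps, here's a rough Python sketch you can run on your own computer against the extracted log archive to pull out the md-array events and disk errors in one pass. This isn't an official tool, just a throwaway script; the patterns are simply the kinds of lines worth looking at in cases like this:

# Rough sketch: scan a downloaded ReadyNAS dmesg.log for md-array events
# and disk I/O errors. Assumes the log archive has been extracted so that
# dmesg.log sits in the current directory; the patterns below are just
# the sort of lines discussed in this thread, not an official list.
import re

PATTERNS = (
    r"md\d+",         # md array events (md0, md125, md126, md127, ...)
    r"I/O error",     # block-layer read failures
    r"Medium Error",  # SCSI sense data pointing at a bad disk
    r"BTRFS error",   # filesystem-level mount failures
)
matcher = re.compile("|".join(PATTERNS))

with open("dmesg.log", errors="replace") as log:
    for line in log:
        if matcher.search(line):
            print(line.rstrip())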

Message 8 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Thanks @jak0lantash for the response.

 

Yes, I did upvote that idea.

 

Here are the only entries in dmesg.log that include md127:

[Tue May 30 10:45:19 2017] md: md127 stopped.
[Tue May 30 10:45:19 2017] md/raid1:md127: active with 2 out of 2 mirrors
[Tue May 30 10:45:19 2017] md127: detected capacity change from 0 to 1000059830272
[Tue May 30 10:45:19 2017] BTRFS: device label 0ed5f04c:data devid 3 transid 1295 /dev/md127

Message 9 of 20
jak0lantash
Mentor

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.


@RandomNASChap wrote:

Thanks @jak0lantash for the response.

 

Yes, I did upvote that idea.

 

Here are the only entries in dmesg.log that include md127:

[Tue May 30 10:45:19 2017] md: md127 stopped.
[Tue May 30 10:45:19 2017] md/raid1:md127: active with 2 out of 2 mirrors
[Tue May 30 10:45:19 2017] md127: detected capacity change from 0 to 1000059830272
[Tue May 30 10:45:19 2017] BTRFS: device label 0ed5f04c:data devid 3 transid 1295 /dev/md127


OK. That won't be enough in your case. Please also check md126 and md125.

Message 10 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Here's an extract from that same log file around and including the time of the "md127" events (Tue May 30, around 10:45).

I've bolded the four entries that include md127.

Please tell me if you'd like more.

 

[Tue May 30 10:45:00 2017] scsi 8:0:0:0: Direct-Access SMI USB DISK 1100 PQ: 0 ANSI: 0 CCS
[Tue May 30 10:45:00 2017] sd 8:0:0:0: Attached scsi generic sg6 type 0
[Tue May 30 10:45:00 2017] sd 8:0:0:0: Embedded Enclosure Device
[Tue May 30 10:45:00 2017] sd 8:0:0:0: Wrong diagnostic page; asked for 1 got 0
[Tue May 30 10:45:00 2017] sd 8:0:0:0: [sdg] 489472 512-byte logical blocks: (251 MB/239 MiB)
[Tue May 30 10:45:00 2017] sd 8:0:0:0: Failed to get diagnostic page 0xffffffea
[Tue May 30 10:45:00 2017] sd 8:0:0:0: [sdg] Write Protect is off
[Tue May 30 10:45:00 2017] sd 8:0:0:0: [sdg] Mode Sense: 43 00 00 00
[Tue May 30 10:45:00 2017] sd 8:0:0:0: Failed to bind enclosure -19
[Tue May 30 10:45:00 2017] sd 8:0:0:0: [sdg] No Caching mode page found
[Tue May 30 10:45:00 2017] sd 8:0:0:0: [sdg] Assuming drive cache: write through
[Tue May 30 10:45:00 2017] sdg: sdg1
[Tue May 30 10:45:00 2017] sd 8:0:0:0: [sdg] Attached SCSI removable disk
[Tue May 30 10:45:03 2017] ata6.00: exception Emask 0x0 SAct 0x10000000 SErr 0x0 action 0x0
[Tue May 30 10:45:03 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:03 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:03 2017] ata6.00: cmd 60/08:e0:a8:88:e0/00:00:e8:00:00/40 tag 28 ncq 4096 in
[Tue May 30 10:45:03 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:03 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:03 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:03 2017] sd 5:0:0:0: [sdf] tag#28 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:03 2017] sd 5:0:0:0: [sdf] tag#28 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:03 2017] sd 5:0:0:0: [sdf] tag#28 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:03 2017] sd 5:0:0:0: [sdf] tag#28 CDB: Read(10) 28 00 e8 e0 88 a8 00 00 08 00
[Tue May 30 10:45:03 2017] blk_update_request: I/O error, dev sdf, sector 3907029160
[Tue May 30 10:45:03 2017] Buffer I/O error on dev sdf, logical block 488378645, async page read
[Tue May 30 10:45:03 2017] Alternate GPT is invalid, using primary GPT.
[Tue May 30 10:45:03 2017] ata6: EH complete
[Tue May 30 10:45:03 2017] sdf: sdf1 sdf2 sdf3
[Tue May 30 10:45:03 2017] sd 5:0:0:0: [sdf] Attached SCSI disk
[Tue May 30 10:45:03 2017] Freeing unused kernel memory: 852K (ffffffff88f49000 - ffffffff8901e000)
[Tue May 30 10:45:03 2017] vpd: module license 'Proprietary' taints kernel.
[Tue May 30 10:45:03 2017] Disabling lock debugging due to kernel taint
[Tue May 30 10:45:03 2017] ReadyNAS VPD init
[Tue May 30 10:45:03 2017] sx8635_probe()
[Tue May 30 10:45:03 2017] sx8635 0-002b: probe()
[Tue May 30 10:45:03 2017] input: Semtech Cap Button as /devices/pci0000:00/0000:00:1f.3/i2c-0/0-002b/input/input6
[Tue May 30 10:45:03 2017] sx86XX_init()
[Tue May 30 10:45:07 2017] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[Tue May 30 10:45:07 2017] 8021q: adding VLAN 0 to HW filter on device eth1
[Tue May 30 10:45:07 2017] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Tue May 30 10:45:07 2017] 8021q: adding VLAN 0 to HW filter on device eth0
[Tue May 30 10:45:10 2017] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[Tue May 30 10:45:10 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[Tue May 30 10:45:17 2017] md: md0 stopped.
[Tue May 30 10:45:17 2017] md: bind<sdb1>
[Tue May 30 10:45:17 2017] md: bind<sdc1>
[Tue May 30 10:45:17 2017] md: bind<sdd1>
[Tue May 30 10:45:17 2017] md: bind<sde1>
[Tue May 30 10:45:17 2017] md: bind<sdf1>
[Tue May 30 10:45:17 2017] md: bind<sda1>
[Tue May 30 10:45:17 2017] md/raid1:md0: active with 6 out of 6 mirrors
[Tue May 30 10:45:17 2017] md0: detected capacity change from 0 to 4290772992
[Tue May 30 10:45:17 2017] md: md1 stopped.
[Tue May 30 10:45:17 2017] md: bind<sdb2>
[Tue May 30 10:45:17 2017] md: bind<sdc2>
[Tue May 30 10:45:17 2017] md: bind<sdd2>
[Tue May 30 10:45:17 2017] md: bind<sde2>
[Tue May 30 10:45:17 2017] md: bind<sdf2>
[Tue May 30 10:45:17 2017] md: bind<sda2>
[Tue May 30 10:45:17 2017] md/raid:md1: device sda2 operational as raid disk 0
[Tue May 30 10:45:17 2017] md/raid:md1: device sdf2 operational as raid disk 5
[Tue May 30 10:45:17 2017] md/raid:md1: device sde2 operational as raid disk 4
[Tue May 30 10:45:17 2017] md/raid:md1: device sdd2 operational as raid disk 3
[Tue May 30 10:45:17 2017] md/raid:md1: device sdc2 operational as raid disk 2
[Tue May 30 10:45:17 2017] md/raid:md1: device sdb2 operational as raid disk 1
[Tue May 30 10:45:17 2017] md/raid:md1: allocated 6474kB
[Tue May 30 10:45:17 2017] md/raid:md1: raid level 6 active with 6 out of 6 devices, algorithm 2
[Tue May 30 10:45:17 2017] RAID conf printout:
[Tue May 30 10:45:17 2017] --- level:6 rd:6 wd:6
[Tue May 30 10:45:17 2017] disk 0, o:1, dev:sda2
[Tue May 30 10:45:17 2017] disk 1, o:1, dev:sdb2
[Tue May 30 10:45:17 2017] disk 2, o:1, dev:sdc2
[Tue May 30 10:45:17 2017] disk 3, o:1, dev:sdd2
[Tue May 30 10:45:17 2017] disk 4, o:1, dev:sde2
[Tue May 30 10:45:17 2017] disk 5, o:1, dev:sdf2
[Tue May 30 10:45:17 2017] md1: detected capacity change from 0 to 2145124352
[Tue May 30 10:45:17 2017] BTRFS: device label 0ed5f04c:root devid 1 transid 4720 /dev/md0
[Tue May 30 10:45:17 2017] BTRFS info (device md0): has skinny extents
[Tue May 30 10:45:18 2017] systemd[1]: Failed to insert module 'kdbus': Function not implemented
[Tue May 30 10:45:18 2017] systemd[1]: systemd 230 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
[Tue May 30 10:45:18 2017] systemd[1]: Detected architecture x86-64.
[Tue May 30 10:45:18 2017] systemd[1]: Set hostname to <BrisvegasNAS>.
[Tue May 30 10:45:18 2017] systemd[1]: Reached target Encrypted Volumes.
[Tue May 30 10:45:18 2017] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[Tue May 30 10:45:18 2017] systemd[1]: Created slice User and Session Slice.
[Tue May 30 10:45:18 2017] systemd[1]: Created slice System Slice.
[Tue May 30 10:45:18 2017] systemd[1]: Reached target Slices.
[Tue May 30 10:45:18 2017] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[Tue May 30 10:45:18 2017] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[Tue May 30 10:45:18 2017] systemd[1]: Listening on udev Kernel Socket.
[Tue May 30 10:45:18 2017] systemd[1]: Listening on Journal Socket.
[Tue May 30 10:45:18 2017] systemd[1]: Starting Load Kernel Modules...
[Tue May 30 10:45:18 2017] systemd[1]: Starting MD arrays...
[Tue May 30 10:45:18 2017] systemd[1]: Starting Remount Root and Kernel File Systems...
[Tue May 30 10:45:18 2017] systemd[1]: Mounting POSIX Message Queue File System...
[Tue May 30 10:45:18 2017] systemd[1]: Started ReadyNAS LCD splasher.
[Tue May 30 10:45:18 2017] systemd[1]: Reached target Paths.
[Tue May 30 10:45:18 2017] systemd[1]: Listening on udev Control Socket.
[Tue May 30 10:45:18 2017] systemd[1]: Listening on /dev/initctl Compatibility Named Pipe.
[Tue May 30 10:45:18 2017] systemd[1]: Mounting Debug File System...
[Tue May 30 10:45:18 2017] systemd[1]: Reached target Remote File Systems (Pre).
[Tue May 30 10:45:18 2017] systemd[1]: Reached target Remote File Systems.
[Tue May 30 10:45:18 2017] systemd[1]: Created slice system-serial\x2dgetty.slice.
[Tue May 30 10:45:18 2017] systemd[1]: Starting ReadyNASOS system prep...
[Tue May 30 10:45:18 2017] systemd[1]: Listening on Journal Socket (/dev/log).
[Tue May 30 10:45:18 2017] systemd[1]: Starting Journal Service...
[Tue May 30 10:45:18 2017] systemd[1]: Created slice system-getty.slice.
[Tue May 30 10:45:18 2017] systemd[1]: Starting Create list of required static device nodes for the current kernel...
[Tue May 30 10:45:18 2017] systemd[1]: Mounted Debug File System.
[Tue May 30 10:45:18 2017] systemd[1]: Mounted POSIX Message Queue File System.
[Tue May 30 10:45:18 2017] systemd[1]: Started Load Kernel Modules.
[Tue May 30 10:45:18 2017] systemd[1]: Started Remount Root and Kernel File Systems.
[Tue May 30 10:45:18 2017] systemd[1]: Started Create list of required static device nodes for the current kernel.
[Tue May 30 10:45:18 2017] systemd[1]: Started ReadyNASOS system prep.
[Tue May 30 10:45:18 2017] systemd[1]: Starting Create Static Device Nodes in /dev...
[Tue May 30 10:45:18 2017] systemd[1]: Starting Load/Save Random Seed...
[Tue May 30 10:45:18 2017] systemd[1]: Starting Rebuild Hardware Database...
[Tue May 30 10:45:18 2017] systemd[1]: Starting Apply Kernel Variables...
[Tue May 30 10:45:18 2017] systemd[1]: Mounting Configuration File System...
[Tue May 30 10:45:18 2017] systemd[1]: Mounting FUSE Control File System...
[Tue May 30 10:45:18 2017] systemd[1]: Mounted Configuration File System.
[Tue May 30 10:45:18 2017] systemd[1]: Mounted FUSE Control File System.
[Tue May 30 10:45:18 2017] systemd[1]: Started Load/Save Random Seed.
[Tue May 30 10:45:18 2017] systemd[1]: Started Apply Kernel Variables.
[Tue May 30 10:45:18 2017] systemd[1]: Started Create Static Device Nodes in /dev.
[Tue May 30 10:45:18 2017] systemd[1]: Starting udev Kernel Device Manager...
[Tue May 30 10:45:18 2017] systemd[1]: Started udev Kernel Device Manager.
[Tue May 30 10:45:19 2017] md: md127 stopped.
[Tue May 30 10:45:19 2017] systemd[1]: Started Rebuild Hardware Database.
[Tue May 30 10:45:19 2017] md: bind<sdb5>
[Tue May 30 10:45:19 2017] md: bind<sda5>
[Tue May 30 10:45:19 2017] md/raid1:md127: active with 2 out of 2 mirrors
[Tue May 30 10:45:19 2017] md127: detected capacity change from 0 to 1000059830272
[Tue May 30 10:45:19 2017] BTRFS: device label 0ed5f04c:data devid 3 transid 1295 /dev/md127
[Tue May 30 10:45:19 2017] systemd[1]: Found device /dev/disk/by-label/0ed5f04c:data.
[Tue May 30 10:45:19 2017] systemd[1]: Started Journal Service.
[Tue May 30 10:45:19 2017] md: md126 stopped.

Message 11 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Continued, covering the same time period...

 

[Tue May 30 10:45:19 2017] md: bind<sdb4>
[Tue May 30 10:45:19 2017] md: bind<sdc4>
[Tue May 30 10:45:19 2017] md: bind<sdd4>
[Tue May 30 10:45:19 2017] md: bind<sda4>
[Tue May 30 10:45:19 2017] md/raid:md126: device sda4 operational as raid disk 0
[Tue May 30 10:45:19 2017] md/raid:md126: device sdd4 operational as raid disk 3
[Tue May 30 10:45:19 2017] md/raid:md126: device sdc4 operational as raid disk 2
[Tue May 30 10:45:19 2017] md/raid:md126: device sdb4 operational as raid disk 1
[Tue May 30 10:45:19 2017] md/raid:md126: allocated 4362kB
[Tue May 30 10:45:19 2017] md/raid:md126: raid level 5 active with 4 out of 4 devices, algorithm 2
[Tue May 30 10:45:19 2017] RAID conf printout:
[Tue May 30 10:45:19 2017] --- level:5 rd:4 wd:4
[Tue May 30 10:45:19 2017] disk 0, o:1, dev:sda4
[Tue May 30 10:45:19 2017] disk 1, o:1, dev:sdb4
[Tue May 30 10:45:19 2017] disk 2, o:1, dev:sdc4
[Tue May 30 10:45:19 2017] disk 3, o:1, dev:sdd4
[Tue May 30 10:45:19 2017] md126: detected capacity change from 0 to 3000179490816
[Tue May 30 10:45:19 2017] systemd-journald[1424]: Received request to flush runtime journal from PID 1
[Tue May 30 10:45:20 2017] Adding 2094844k swap on /dev/md1. Priority:-1 extents:1 across:2094844k
[Tue May 30 10:45:20 2017] BTRFS: device label 0ed5f04c:data devid 2 transid 1295 /dev/md126
[Tue May 30 10:45:25 2017] ata6.00: exception Emask 0x0 SAct 0x2000 SErr 0x0 action 0x0
[Tue May 30 10:45:25 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:25 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:25 2017] ata6.00: cmd 60/08:68:c0:87:e0/00:00:e8:00:00/40 tag 13 ncq 4096 in
[Tue May 30 10:45:25 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:25 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:25 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:25 2017] sd 5:0:0:0: [sdf] tag#13 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:25 2017] sd 5:0:0:0: [sdf] tag#13 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:25 2017] sd 5:0:0:0: [sdf] tag#13 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:25 2017] sd 5:0:0:0: [sdf] tag#13 CDB: Read(10) 28 00 e8 e0 87 c0 00 00 08 00
[Tue May 30 10:45:25 2017] blk_update_request: I/O error, dev sdf, sector 3907028928
[Tue May 30 10:45:25 2017] ata6: EH complete
[Tue May 30 10:45:25 2017] md: md125 stopped.
[Tue May 30 10:45:25 2017] md: bind<sdb3>
[Tue May 30 10:45:25 2017] md: bind<sdc3>
[Tue May 30 10:45:25 2017] md: bind<sdd3>
[Tue May 30 10:45:25 2017] md: bind<sde3>
[Tue May 30 10:45:25 2017] md: bind<sdf3>
[Tue May 30 10:45:25 2017] md: bind<sda3>
[Tue May 30 10:45:25 2017] md/raid:md125: device sda3 operational as raid disk 0
[Tue May 30 10:45:25 2017] md/raid:md125: device sdf3 operational as raid disk 5
[Tue May 30 10:45:25 2017] md/raid:md125: device sde3 operational as raid disk 4
[Tue May 30 10:45:25 2017] md/raid:md125: device sdd3 operational as raid disk 3
[Tue May 30 10:45:25 2017] md/raid:md125: device sdc3 operational as raid disk 2
[Tue May 30 10:45:25 2017] md/raid:md125: device sdb3 operational as raid disk 1
[Tue May 30 10:45:25 2017] md/raid:md125: allocated 6474kB
[Tue May 30 10:45:25 2017] md/raid:md125: raid level 5 active with 6 out of 6 devices, algorithm 2
[Tue May 30 10:45:25 2017] RAID conf printout:
[Tue May 30 10:45:25 2017] --- level:5 rd:6 wd:6
[Tue May 30 10:45:25 2017] disk 0, o:1, dev:sda3
[Tue May 30 10:45:25 2017] disk 1, o:1, dev:sdb3
[Tue May 30 10:45:25 2017] disk 2, o:1, dev:sdc3
[Tue May 30 10:45:25 2017] disk 3, o:1, dev:sdd3
[Tue May 30 10:45:25 2017] disk 4, o:1, dev:sde3
[Tue May 30 10:45:25 2017] disk 5, o:1, dev:sdf3
[Tue May 30 10:45:25 2017] md125: detected capacity change from 0 to 9977163939840
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591680 on sdf3)
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591792 on sdf3)
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591800 on sdf3)
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591744 on sdf3)
[Tue May 30 10:45:30 2017] BTRFS info (device md126): has skinny extents
[Tue May 30 10:45:30 2017] BTRFS error (device md126): failed to read the system array: -5
[Tue May 30 10:45:30 2017] BTRFS error (device md126): open_ctree failed
[Tue May 30 10:45:30 2017] ata6.00: exception Emask 0x0 SAct 0x400000 SErr 0x0 action 0x0
[Tue May 30 10:45:30 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:30 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:30 2017] ata6.00: cmd 60/08:b0:30:86:e0/00:00:e8:00:00/40 tag 22 ncq 4096 in
[Tue May 30 10:45:30 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:30 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:30 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:30 2017] sd 5:0:0:0: [sdf] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:30 2017] sd 5:0:0:0: [sdf] tag#22 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:30 2017] sd 5:0:0:0: [sdf] tag#22 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:30 2017] sd 5:0:0:0: [sdf] tag#22 CDB: Read(10) 28 00 e8 e0 86 30 00 00 08 00
[Tue May 30 10:45:30 2017] blk_update_request: I/O error, dev sdf, sector 3907028528
[Tue May 30 10:45:30 2017] ata6: EH complete
[Tue May 30 10:45:35 2017] ata6.00: exception Emask 0x0 SAct 0x8000000 SErr 0x0 action 0x0
[Tue May 30 10:45:35 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:35 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:35 2017] ata6.00: cmd 60/08:d8:30:86:e0/00:00:e8:00:00/40 tag 27 ncq 4096 in
[Tue May 30 10:45:35 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:35 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:35 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:35 2017] sd 5:0:0:0: [sdf] tag#27 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:35 2017] sd 5:0:0:0: [sdf] tag#27 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:35 2017] sd 5:0:0:0: [sdf] tag#27 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:35 2017] sd 5:0:0:0: [sdf] tag#27 CDB: Read(10) 28 00 e8 e0 86 30 00 00 08 00
[Tue May 30 10:45:35 2017] blk_update_request: I/O error, dev sdf, sector 3907028528
[Tue May 30 10:45:35 2017] ata6: EH complete
[Tue May 30 10:45:36 2017] BTRFS: device label 0ed5f04c:data devid 1 transid 1295 /dev/md125
[Tue May 30 10:45:40 2017] ata6.00: exception Emask 0x0 SAct 0x600000 SErr 0x0 action 0x0
[Tue May 30 10:45:40 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:40 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:40 2017] ata6.00: cmd 60/08:a8:00:88:e0/00:00:e8:00:00/40 tag 21 ncq 4096 in
[Tue May 30 10:45:40 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:40 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:40 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:40 2017] sd 5:0:0:0: [sdf] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:40 2017] sd 5:0:0:0: [sdf] tag#21 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:40 2017] sd 5:0:0:0: [sdf] tag#21 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:40 2017] sd 5:0:0:0: [sdf] tag#21 CDB: Read(10) 28 00 e8 e0 88 00 00 00 08 00
[Tue May 30 10:45:40 2017] blk_update_request: I/O error, dev sdf, sector 3907028992
[Tue May 30 10:45:40 2017] ata6: EH complete
[Tue May 30 10:45:46 2017] ata6.00: exception Emask 0x0 SAct 0xa000000 SErr 0x0 action 0x0
[Tue May 30 10:45:46 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:46 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:46 2017] ata6.00: cmd 60/08:c8:c0:87:e0/00:00:e8:00:00/40 tag 25 ncq 4096 in
[Tue May 30 10:45:46 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:46 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:46 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:46 2017] sd 5:0:0:0: [sdf] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:46 2017] sd 5:0:0:0: [sdf] tag#25 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:46 2017] sd 5:0:0:0: [sdf] tag#25 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:46 2017] sd 5:0:0:0: [sdf] tag#25 CDB: Read(10) 28 00 e8 e0 87 c0 00 00 08 00
[Tue May 30 10:45:46 2017] blk_update_request: I/O error, dev sdf, sector 3907028928
[Tue May 30 10:45:46 2017] ata6: EH complete
[Tue May 30 10:45:51 2017] ata6.00: exception Emask 0x0 SAct 0x280 SErr 0x0 action 0x0
[Tue May 30 10:45:51 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:51 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:51 2017] ata6.00: cmd 60/08:38:00:88:e0/00:00:e8:00:00/40 tag 7 ncq 4096 in
[Tue May 30 10:45:51 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:51 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:51 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:51 2017] sd 5:0:0:0: [sdf] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:51 2017] sd 5:0:0:0: [sdf] tag#7 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:51 2017] sd 5:0:0:0: [sdf] tag#7 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:51 2017] sd 5:0:0:0: [sdf] tag#7 CDB: Read(10) 28 00 e8 e0 88 00 00 00 08 00
[Tue May 30 10:45:51 2017] blk_update_request: I/O error, dev sdf, sector 3907028992
[Tue May 30 10:45:51 2017] Buffer I/O error on dev sdf, logical block 488378624, async page read
[Tue May 30 10:45:51 2017] ata6: EH complete
[Tue May 30 10:45:56 2017] ata6.00: exception Emask 0x0 SAct 0xd00000 SErr 0x0 action 0x0
[Tue May 30 10:45:56 2017] ata6.00: irq_stat 0x40000008
[Tue May 30 10:45:56 2017] ata6.00: failed command: READ FPDMA QUEUED
[Tue May 30 10:45:56 2017] ata6.00: cmd 60/08:a0:30:88:e0/00:00:e8:00:00/40 tag 20 ncq 4096 in
[Tue May 30 10:45:56 2017] ata6.00: status: { DRDY ERR }
[Tue May 30 10:45:56 2017] ata6.00: error: { UNC }
[Tue May 30 10:45:56 2017] ata6.00: configured for UDMA/133
[Tue May 30 10:45:56 2017] sd 5:0:0:0: [sdf] tag#20 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[Tue May 30 10:45:56 2017] sd 5:0:0:0: [sdf] tag#20 Sense Key : Medium Error [current] [descriptor]
[Tue May 30 10:45:56 2017] sd 5:0:0:0: [sdf] tag#20 Add. Sense: Unrecovered read error - auto reallocate failed
[Tue May 30 10:45:56 2017] sd 5:0:0:0: [sdf] tag#20 CDB: Read(10) 28 00 e8 e0 88 30 00 00 08 00
[Tue May 30 10:45:56 2017] blk_update_request: I/O error, dev sdf, sector 3907029040
[Tue May 30 10:45:56 2017] ata6: EH complete

Message 12 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Ah, bugger, I only just saw your message about looking at md126 etc. 

 

Will do that now.

 

Message 13 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

md126 extract

 

[Tue May 30 10:45:19 2017] md: md126 stopped.
[Tue May 30 10:45:19 2017] md/raid:md126: device sda4 operational as raid disk 0
[Tue May 30 10:45:19 2017] md/raid:md126: device sdd4 operational as raid disk 3
[Tue May 30 10:45:19 2017] md/raid:md126: device sdc4 operational as raid disk 2
[Tue May 30 10:45:19 2017] md/raid:md126: device sdb4 operational as raid disk 1
[Tue May 30 10:45:19 2017] md/raid:md126: allocated 4362kB
[Tue May 30 10:45:19 2017] md/raid:md126: raid level 5 active with 4 out of 4 devices, algorithm 2
[Tue May 30 10:45:19 2017] md126: detected capacity change from 0 to 3000179490816
[Tue May 30 10:45:20 2017] BTRFS: device label 0ed5f04c:data devid 2 transid 1295 /dev/md126
[Tue May 30 10:45:30 2017] BTRFS info (device md126): has skinny extents
[Tue May 30 10:45:30 2017] BTRFS error (device md126): failed to read the system array: -5
[Tue May 30 10:45:30 2017] BTRFS error (device md126): open_ctree failed

Message 14 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

md125 extract

 

[Tue May 30 10:45:25 2017] md: md125 stopped.
[Tue May 30 10:45:25 2017] md/raid:md125: device sda3 operational as raid disk 0
[Tue May 30 10:45:25 2017] md/raid:md125: device sdf3 operational as raid disk 5
[Tue May 30 10:45:25 2017] md/raid:md125: device sde3 operational as raid disk 4
[Tue May 30 10:45:25 2017] md/raid:md125: device sdd3 operational as raid disk 3
[Tue May 30 10:45:25 2017] md/raid:md125: device sdc3 operational as raid disk 2
[Tue May 30 10:45:25 2017] md/raid:md125: device sdb3 operational as raid disk 1
[Tue May 30 10:45:25 2017] md/raid:md125: allocated 6474kB
[Tue May 30 10:45:25 2017] md/raid:md125: raid level 5 active with 6 out of 6 devices, algorithm 2
[Tue May 30 10:45:25 2017] md125: detected capacity change from 0 to 9977163939840
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591680 on sdf3)
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591792 on sdf3)
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591800 on sdf3)
[Tue May 30 10:45:25 2017] md/raid:md125: read error corrected (8 sectors at 3897591744 on sdf3)
[Tue May 30 10:45:36 2017] BTRFS: device label 0ed5f04c:data devid 1 transid 1295 /dev/md125
[Tue May 30 10:46:06 2017] md/raid:md125: read error NOT corrected!! (sector 3897591280 on sdf3).
[Tue May 30 10:46:32 2017] md/raid:md125: read error NOT corrected!! (sector 3897591280 on sdf3).
[Tue May 30 10:46:53 2017] md/raid:md125: read error corrected (8 sectors at 3897591280 on sdf3)

Message 15 of 20
jak0lantash
Mentor

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

In the same log archive, look at disk_info.log and search for sdf. Check the serial number and channel (add 1), as well as the error counters for this drive. sdf is most likely dead, as it's spewing I/O errors and preventing md125 from starting properly.

Confirm the slot from the GUI by comparing the serial number.
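
If you'd rather not eyeball the whole file, here's a rough sketch in the same spirit as the earlier one; run it on your computer next to the extracted disk_info.log. The exact layout of disk_info.log varies between firmware versions, so the context width is just a guess on my part:

# Rough sketch: print a few lines of context around each mention of the
# failing device in disk_info.log, so its serial number can be matched
# against the slot shown in the GUI. The layout of disk_info.log varies
# by firmware version, so the context width here is only a guess.
TARGET = "sdf"   # device name reported in dmesg.log
CONTEXT = 5      # lines of context either side of each hit

with open("disk_info.log", errors="replace") as log:
    lines = log.read().splitlines()

for i, line in enumerate(lines):
    if TARGET in line:
        lo, hi = max(0, i - CONTEXT), min(len(lines), i + CONTEXT + 1)
        print("\n".join(lines[lo:hi]))
        print("-" * 40)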

Once you've confirmed which physical drive is sdf:

  1. Gracefully shut down the NAS.
  2. Unslot sdf.
  3. Boot your NAS.
Message 16 of 20
evan2
NETGEAR Expert

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

@tom72

Could you please send the logs?

How do I send all logs to ReadyNAS Community moderators?
https://kb.netgear.com/21543/How-do-I-send-all-logs-to-ReadyNAS-Community-moderators

Message 17 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Just in case anyone is searching for this error and wants to know one (of possibly numerous) reasons for it, and the solution...

 

I contacted NETGEAR Support and requested help. They were very helpful, diagnosed the issue, suggested a couple of solutions, and all in all I'd recommend them.

 

Situation

 

I set up my RN316 a few days ago under the (then current) firmware 6.7.3.

X-RAID2 setup (one of the reasons I wanted this NAS was to use different sized drives).
The drives were all put in at once:
2x 2TB (Ch 5 & 6), 2x 3TB (Ch 3 & 4), 2x 4TB (Ch 1 & 2)
Although it took 26 hours to get all the disks sorted out [edit: reshape], the NAS seemed to be working fine; I only had a small amount of test data on it.

I updated the firmware last night to 6.7.4, thinking the right time to do so was
a) when there was little data on the drives, and
b) when there wasn't any data to lose, and
c) to see how seamless NetGear's firmware updates were in a low-risk situation, and
d) to get the RN316 onto the latest firmware.

Now I show a similar error to tom72.
- I can access the NAS local admin page at 10.0.1.xx in a browser OK.
- all six HDDs show as red in System/Volumes
- I now have a warning saying "Remove inactive volumes to use the disk. Disk #1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 1, 2"
- instead of a single summary box on the left-hand side of the System/Volumes page, one that previously showed my working setup, I now have two boxes. The top one has Data = 0, Free Space = 0, Type = RAID 5. The bottom one has Data = 0, Free Space = 0, Type = RAID Unknown.
- whenever I try to use any other tab, a warning pops up saying "No volume exists. NETGEAR Recommends that you create a volume before configuring others. Navigate to the System > Volumes page to create a volume."

I'm pretty glad I had the opportunity to do a firmware update when nothing important was on the NAS.

 

I provided system logs, screenshots etc, similar to those above in this thread.

 

Diagnosis

 

From NETGEAR Support:

 

Upon checking the logs, it seems that the HDD inserted in channel 5, with serial number S2HGJD2Z805484, is now faulty due to a number of errors.

This is the only reason why you encountered this issue on your device.

You can try to remove this disk and the error should disappear; your NAS is on a RAID 5 configuration, which has one-drive fault tolerance.

You just need to replace your faulty drive to fix the issue.

Please see the list of recommended and tested drives for your NAS.

 

My next actions

 

I shut down the NAS.

I replaced the faulty 2TB HDD with a fresh 4TB HDD, formatted exactly the same as the other HDDs.
I inserted the replacement disk, and turned the power on at about 9:30 PM.

At 8:00 AM the following morning the NAS was still stuck at "Booting..." on the front panel of the RN316.

I could not reach the local admin page through my browser.

 

What I should have / could have done

It turns out I should have shut down, removed the faulty HDD, and rebooted without any drive in that slot.

After booting, I should then have hot-plugged the new drive (i.e. inserted the replacement drive while the NAS was powered on).

The NAS should then resync automatically.
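
(For anyone who wants to watch the resync progress rather than guess, and who has SSH enabled on their NAS, a trivial script like the one below prints the mdraid progress lines once a minute. /proc/mdstat is standard Linux mdraid; SSH access itself is an assumption on my part and unsupported territory on a ReadyNAS, so use at your own risk.)

# Minimal sketch: poll /proc/mdstat and print any resync/recovery/reshape
# progress lines once a minute. /proc/mdstat is standard Linux mdraid;
# running this on the NAS assumes SSH access, which is an unsupported
# extra on ReadyNAS, so use at your own risk.
import time

KEYWORDS = ("resync", "recovery", "reshape")

while True:
    with open("/proc/mdstat") as f:
        progress = [l.rstrip() for l in f if any(k in l for k in KEYWORDS)]
    if not progress:
        print("no resync/recovery/reshape in progress")
        break
    print("\n".join(progress))
    time.sleep(60)  # check again in a minute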

Alternatively I could have performed a factory reset, since there was no data on the NAS.

 

What I actually did to solve the issue

 

Due to time zone differences, and me being somewhat impatient to get going, I did a factory reset.

 

[As an aside, interestingly, the RN316 wouldn't factory reset with the new drive inserted.

To get the NAS to factory reset, I had to remove the new drive (so only 5 of the 6 original HDDs were in the NAS), and then factory reset as per the manual for the RN316.]

 

After the factory reset (with 5 of 6 drives in place) the RN316 resynced with just the 5 drives. This took about a day, which was about what I expected.

After the resync with 5 drives was complete, I inserted the new drive.

As the old (faulty) HDD was 2TB, and the new HDD was 4TB, it then took about another day to add the 6th HDD to the array, as a reshape was required.

 

I now have a working 6 HDD array, as per the screenshot below:

[Screenshot attached: Screen Shot 2017-06-07 at 16.31.29.png]

 

Cheers to those who gave their time and assistance in the thread above. Hopefully this is of some help to later searchers for this error, although please note there may well be plenty of other reasons for the error, so one size may not fit all!

 

Model: RN31600|ReadyNAS 300 Series 6- Bay
Message 18 of 20
RandomNASChap
Aspirant

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

I angered the NAS gods... less than an hour after my last post, we had an (unexpected) power interruption to the house circuit that the NAS is connected to.

 

The NAS doesn't yet have a UPS connected, so it did not power down gracefully... so I now have a 35-hour resync underway. Yay.

 

[Screenshot attached: Screen Shot 2017-06-07 at 17.11.43.png]

Model: RN31600|ReadyNAS 300 Series 6- Bay
Message 19 of 20
GusT
Tutor

Re: ReadyNAS - Remove inactive volumes to use the disk. Disk #1,2.

Did any of you having data on your NAS manage to solve the problem?

 

What happened to me:

  1. Had three 1TB HDDs in my NAS for years
  2. On 11 June I added a 4th 1TB HDD to my RAID-0 volume as a parity disk to gain protection
  3. The sync lasted until 14 June with degraded performance, yet everything worked
  4. According to the logs, at the moment the sync finished, a drive was reported as failed
  5. I received the "Remove inactive volumes to use the disk. Disk #1, 2, 3, 4" message and the same screens posted at the beginning of this thread

It is clear that my disks cannot all have failed. It is also clear that the chance is incredibly low that a disk fails at the very moment a days-long operation finishes, after working properly for years.

 

Actually, Disk #1 was shown with a red LED in the graphic. I gracefully shut my NAS down, ejected and reinserted the same disk, and booted the NAS up. Now all four LEDs are green, but all disks appear red. If I hover over the disks, all of them are reported ONLINE, with no indication of any failures.

 

The firmware was upgraded to the latest version before the sync.

 

For some time I could access my NAS data, but I can't any longer. Only the root folder listing the shares of my volume loads; I cannot get into any of the shares. Yet I can clearly hear the NAS performing I/O on the disks.

 

In summary, I can't believe an actual hardware failure occurred, and even if it did, I'm clueless as to which disk to replace. And I don't want another sync to run for 3 more days with the same result...

 

Thanks for any help,

Gus

Model: RN10400|ReadyNAS 100 Series 4-Bay (Diskless)
Message 20 of 20