
RN 104 "Remove inactive volumes to use disk" despite passed disk test???

Tartarus116
Aspirant


Hello everybody,

 

Three days ago my ReadyNAS RN104 started malfunctioning and displayed the error message "Remove inactive volumes to use disk. Disk # 1,2,3,4." But the logs didn't show any drive malfunction, so I performed another disk-test, which took a little over 10h. 

 

However, all drives (in RAID5 configuration) passed the test and showed no signs of being corrupted:

 

Disk sdd:
	HostID: 2fe5356c
	Flags: 0x0
	Size: 7814037168 (3726 GB)
	Free: 4062
	Controller 0
	Channel: 0
	Model: WDC WD40EFRX-68WT0N0
	Serial: WD-WCC4E3TV57AA
	Firmware: 82.00A82
	Class: SATA (2)
	RPM: 5400
	SMART data 
		Reallocated Sectors:            0
		Reallocation Events:            0
		Spin Retry Count:               0
		Current Pending Sector Count:   0
		Uncorrectable Sector Count:     0
		Temperature:                    43
		Start/Stop Count:               5469
		Power-On Hours:                 22655
		Power Cycle Count:              1157
		Load Cycle Count:               5421
		Latest Self Test:               Passed

Disk sdc:
	HostID: 2fe5356c
	Flags: 0x0
	Size: 7814037168 (3726 GB)
	Free: 4062
	Controller 0
	Channel: 1
	Model: WDC WD40EFRX-68WT0N0
	Serial: WD-WCC4E4ERCDPJ
	Firmware: 82.00A82
	Class: SATA (2)
	RPM: 5400
	SMART data 
		Reallocated Sectors:            0
		Reallocation Events:            0
		Spin Retry Count:               0
		Current Pending Sector Count:   0
		Uncorrectable Sector Count:     0
		Temperature:                    49
		Start/Stop Count:               5774
		Power-On Hours:                 22657
		Power Cycle Count:              1157
		Load Cycle Count:               5728
		Latest Self Test:               Passed

Disk sdb:
	HostID: 2fe5356c
	Flags: 0x0
	Size: 7814037168 (3726 GB)
	Free: 4062
	Controller 0
	Channel: 2
	Model: WDC WD40EFRX-68WT0N0
	Serial: WD-WCC4ECNZLH1X
	Firmware: 82.00A82
	Class: SATA (2)
	RPM: 5400
	SMART data 
		Reallocated Sectors:            0
		Reallocation Events:            0
		Spin Retry Count:               0
		Current Pending Sector Count:   0
		Uncorrectable Sector Count:     0
		Temperature:                    47
		Start/Stop Count:               5980
		Power-On Hours:                 22659
		Power Cycle Count:              1142
		Load Cycle Count:               5948
		Latest Self Test:               Passed

Disk sda:
	HostID: 2fe5356c
	Flags: 0x0
	Size: 7814037168 (3726 GB)
	Free: 4062
	Controller 0
	Channel: 3
	Model: WDC WD40EFRX-68WT0N0
	Serial: WD-WCC4E0ZY6VNN
	Firmware: 82.00A82
	Class: SATA (2)
	RPM: 5400
	SMART data 
		Reallocated Sectors:            0
		Reallocation Events:            0
		Spin Retry Count:               0
		Current Pending Sector Count:   0
		Uncorrectable Sector Count:     0
		Temperature:                    43
		Start/Stop Count:               6866
		Power-On Hours:                 22661
		Power Cycle Count:              1142
		Load Cycle Count:               6834
		Latest Self Test:               Passed
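
For readers skimming these listings: the pre-failure counters are the ones worth checking. A minimal sketch of that check (plain Python; the attribute names mirror the listing above, and on a PC `smartctl -A` from smartmontools would supply the raw values):

```python
# Flag a drive as suspect if any pre-failure SMART counter is non-zero.
CRITICAL_ATTRS = (
    "Reallocated Sectors",
    "Reallocation Events",
    "Spin Retry Count",
    "Current Pending Sector Count",
    "Uncorrectable Sector Count",
)

def suspect_attrs(smart: dict) -> list:
    """Return the critical attributes that have non-zero counts."""
    return [name for name in CRITICAL_ATTRS if smart.get(name, 0) > 0]

# Values from disk sdd above: all zeros, so the drive looks healthy.
sdd = {name: 0 for name in CRITICAL_ATTRS}
print(suspect_attrs(sdd))  # -> []
```

All four drives above come back clean by this test, consistent with the passed self-tests.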

Because the results showed that the disks were not at fault, I performed an OS reset.

 

But the issue persisted, and now the interface is even more sluggish than before; any attempt to change the default password results in an error. Booting in read-only mode didn't help either...

 

I've been running a weekly disk-balance and defrag job, and only about half of my storage is in use, so what's going on here? Does the device have internal flash storage that could be corrupted or something?

 

Firmware version: 6.9.3

OS-Update history:

[2015/01/22 13:47:27] Factory default initiated due to new disks (no RAID, no partitions)!
[2015/01/22 13:48:09] Defaulting to X-RAID2 mode, RAID level 5
[2015/01/22 13:48:41] Factory default initiated on ReadyNASOS 6.2.0 (ReadyNASOS).
[2015/01/24 10:12:13] Updated from ReadyNASOS 6.2.0 (ReadyNASOS) to 6.2.2 (ReadyNASOS).
[2015/01/29 07:06:42] Updated from ReadyNASOS 6.2.2 (ReadyNASOS) to 6.2.2 (ReadyNASOS).
[2015/05/11 14:42:40] Updated from ReadyNASOS 6.2.2 (ReadyNASOS) to 6.2.4 (ReadyNASOS).
[2015/08/03 07:20:42] Updated from ReadyNASOS 6.2.4 (ReadyNASOS) to 6.2.5 (ReadyNASOS).
[1969/12/31 17:00:57] Updated from ReadyNASOS 6.2.5 (ReadyNASOS) to 6.4.0 (ReadyNASOS).
[2016/01/03 07:02:14] Updated from ReadyNASOS 6.4.0 (ReadyNASOS) to 6.4.1 (ReadyNASOS).
[2016/02/28 15:06:49] Updated from ReadyNASOS 6.4.1 (ReadyNASOS) to 6.4.2 (ReadyNASOS).
[2016/05/28 15:53:42] Updated from ReadyNASOS 6.4.2 (ReadyNASOS) to 6.5.0 (ReadyNASOS).
[2016/06/30 15:04:37] Updated from ReadyNASOS 6.5.0 (ReadyNASOS) to 6.5.1 (ReadyNASOS).
[2017/01/19 07:41:45] Updated from ReadyNASOS 6.5.1 (ReadyNASOS) to 6.6.1 (ReadyNASOS).
[2017/05/27 15:58:02 UTC] Updated from ReadyNASOS 6.6.1 (ReadyNASOS) to 6.7.4 (ReadyNASOS).
[2017/06/28 08:50:52 UTC] Updated from ReadyNASOS 6.7.4 (ReadyNASOS) to 6.7.5 (ReadyNASOS).
[2017/08/31 15:14:18 UTC] Updated from ReadyNASOS 6.7.5 (ReadyNASOS) to 6.8.0 (ReadyNASOS).
[2018/03/08 19:01:09 UTC] Updated from ReadyNASOS 6.8.0 (ReadyNASOS) to 6.9.2 (ReadyNASOS).
[2018/04/08 12:20:13 UTC] Updated from ReadyNASOS 6.9.2 (ReadyNASOS) to 6.9.3 (ReadyNASOS).

The kernel log shows a lot of critical BTRFS errors on the day the NAS started malfunctioning:

May 31 13:55:34 Banana_Stand kernel: WARNING: CPU: 0 PID: 624 at fs/btrfs/disk-io.c:541 btree_csum_one_bio+0x10c/0x118()
May 31 13:55:34 Banana_Stand kernel: Modules linked in: vpd(PO)
May 31 13:55:34 Banana_Stand kernel: CPU: 0 PID: 624 Comm: kworker/u2:4 Tainted: P        W  O    4.4.116.armada.1 #1
May 31 13:55:34 Banana_Stand kernel: Hardware name: Marvell Armada 370/XP (Device Tree)
May 31 13:55:34 Banana_Stand kernel: Workqueue: btrfs-worker btrfs_worker_helper
May 31 13:55:34 Banana_Stand kernel: [<c0015f44>] (unwind_backtrace) from [<c00120fc>] (show_stack+0x10/0x18)
May 31 13:55:34 Banana_Stand kernel: [<c00120fc>] (show_stack) from [<c03a6080>] (dump_stack+0x78/0x9c)
May 31 13:55:34 Banana_Stand kernel: [<c03a6080>] (dump_stack) from [<c00249c8>] (warn_slowpath_common+0x74/0xac)
May 31 13:55:34 Banana_Stand kernel: [<c00249c8>] (warn_slowpath_common) from [<c0024a1c>] (warn_slowpath_null+0x1c/0x28)
May 31 13:55:34 Banana_Stand kernel: [<c0024a1c>] (warn_slowpath_null) from [<c02aab64>] (btree_csum_one_bio+0x10c/0x118)
May 31 13:55:34 Banana_Stand kernel: [<c02aab64>] (btree_csum_one_bio) from [<c02a93e4>] (run_one_async_start+0x34/0x48)
May 31 13:55:34 Banana_Stand kernel: [<c02a93e4>] (run_one_async_start) from [<c02eaed0>] (normal_work_helper+0x84/0x1a4)
May 31 13:55:34 Banana_Stand kernel: [<c02eaed0>] (normal_work_helper) from [<c0038148>] (process_one_work+0x11c/0x334)
May 31 13:55:34 Banana_Stand kernel: [<c0038148>] (process_one_work) from [<c00383c8>] (worker_thread+0x30/0x49c)
May 31 13:55:34 Banana_Stand kernel: [<c00383c8>] (worker_thread) from [<c003d4b0>] (kthread+0x104/0x124)
May 31 13:55:34 Banana_Stand kernel: [<c003d4b0>] (kthread) from [<c000f580>] (ret_from_fork+0x14/0x34)
May 31 13:55:34 Banana_Stand kernel: ---[ end trace a1b4f282bb2b4892 ]---
May 31 13:56:08 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096
May 31 13:56:08 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096
May 31 13:56:09 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096
May 31 13:56:09 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096
May 31 13:56:09 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096
May 31 13:56:10 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096
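
To gauge whether these messages point at one damaged spot or widespread corruption, the `BTRFS critical` lines can be tallied by device and logical address. A small sketch against the excerpt above:

```python
import re
from collections import Counter

# Match the "unable to find logical <addr> len <n>" kernel messages.
PATTERN = re.compile(r"BTRFS critical \(device (\S+)\): unable to find logical (\d+)")

def tally_btrfs_errors(log_lines):
    """Count 'unable to find logical' errors per (device, logical address)."""
    hits = Counter()
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            hits[(m.group(1), int(m.group(2)))] += 1
    return hits

log = [
    "May 31 13:56:08 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096",
    "May 31 13:56:08 Banana_Stand kernel: BTRFS critical (device md127): unable to find logical 36128555008 len 4096",
]
print(tally_btrfs_errors(log))  # -> Counter({('md127', 36128555008): 2})
```

In this log the same logical address (36128555008) repeats every time, i.e. the filesystem keeps failing on one metadata location rather than on scattered addresses.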

I also performed a memory test, but it got stuck in the first second and hadn't progressed even after 10 hours. Please let me know if any of you have experienced something similar. Any help is appreciated!

 

Kernel log is attached.

 

Model: RN104|ReadyNAS 100 Series
Message 1 of 6
Marc_V
NETGEAR Employee Retired

Re: RN 104 "Remove inactive volumes to use disk" despite passed disk test???

Hi @Tartarus116

 

Welcome to the Community!

 

The BTRFS errors show that the system was not able to mount/read the tree root. I would suggest contacting NETGEAR Support so they can assist you in recovering the volume or data.

 

You can purchase a Pay-Per-Incident contract if you don't have any support entitlement available. Note that if your NAS needs data recovery, that service has a separate charge. Please see Data Recovery.

 

Regards

 

 

Model: RN104|ReadyNAS 100 Series
Message 2 of 6
Tartarus116
Aspirant

Re: RN 104 "Remove inactive volumes to use disk" despite passed disk test???

Support refused to accept my prepaid credit card, so I ended up solving the problem myself for free.

 

What I did:

 

1) Removed drives from NAS & connected them to PC

2) Tried almost every data-recovery tool out there until I found one that supports the BTRFS file system: ReclaiMe.
3) ReclaiMe auto-detected the RAID volume and its parameters and automatically re-built the RAID.

4) Transferred all of the data onto a single drive.
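
For anyone who wants to attempt the same recovery with free Linux tools instead of ReclaiMe: the usual route is to assemble the ReadyNAS md array read-only on a PC and pull files off with `btrfs restore`. A dry-run sketch that only prints the commands (device names are hypothetical and must match your system; `/dev/md127` is the typical data-volume name on ReadyNAS OS 6):

```shell
#!/bin/sh
# Dry-run sketch: print each recovery command instead of executing it.
run() { echo "+ $*"; }

# Hypothetical device names; check `lsblk` on the recovery PC first.
# The data partition is usually the third partition on each ReadyNAS disk.
DISKS="/dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3"

# 1) Assemble the data array read-only.
run mdadm --assemble --readonly /dev/md127 $DISKS

# 2) Try a read-only mount first; if it succeeds, just copy the data off.
run mount -o ro /dev/md127 /mnt/nas

# 3) If the mount fails, pull files out without mounting (also read-only).
run btrfs restore /dev/md127 /mnt/recovery
```

Nothing here writes to the member disks, which matters when the root cause (such as bad RAM) is still unknown.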

 

I guess the most likely cause of the original issue was faulty memory in the NAS.
Seriously disappointed by the lifetime of NETGEAR hardware, and even more disappointed by the **bleep**ty support.

Hope this helps people who have also experienced this issue.

Model: ReadyNAS Remote|
Message 3 of 6
StephenB
Guru

Re: RN 104 "Remove inactive volumes to use disk" despite passed disk test???

Why are you thinking the issue was caused by faulty memory in the ReadyNAS?

Message 4 of 6
Tartarus116
Aspirant

Re: RN 104 "Remove inactive volumes to use disk" despite passed disk test???

I mentioned at the end of my original post that the NAS had failed the memory test (it got stuck and never progressed), and that it left me unable to change configuration settings such as my password. The disks are absolutely fine, and even though I got a lot of BTRFS errors in the kernel log, I was able to completely recover my data with the ReclaiMe software.

I might look into replacing the memory and see if that works. But for now, I'm just happy I got my data back.

Message 5 of 6
Marc_V
NETGEAR Employee Retired

Re: RN 104 "Remove inactive volumes to use disk" despite passed disk test???

Hi @Tartarus116

 

Good to hear you were able to get it back. There might be issues with the system Support is using, rendering it impossible to use your credit card.

 

ReclaiMe is what Community users often recommend when it comes to data recovery, so it's good that you were able to discover it and pull out your data.

 

Unfortunately, the RN104's memory is soldered, so it cannot be replaced. If your RN104 is less than 3 years old, you may want to contact Support again to RMA the device.

 

 

Regards

 

 

 

 

Message 6 of 6