
Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

avegelien
Aspirant

Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

As a satisfied ReadyNAS user (an RN6000 and two 316-series units for years now), I purchased an RN4200 V2 one month ago in pristine condition, including 6 disks with no errors. I upgraded it to 6.9.1, added 6 more error-free disks myself, and had excellent performance for a few weeks, until yesterday. The 4200 was still powered on but did not respond, so I had to power it off the hard way. After booting, a new, empty volume named data0-0 had been created; the original data0 is still there, but its RAID level shows as unknown.

Can anybody guide me through mounting the volume myself so I can access it again? I don't have a backup of all the data because I was in the middle of rearranging the content of my other ReadyNAS units. I know it's possible: when the same issue occurred on both of my new 316s a year ago, support solved it remotely with some command-line 'magic'. I don't have support for the 4200, so if someone can tell me what the 'magic spells' are, please do!
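For reference, before anyone suggests repair commands, the usual first step in threads like this is a purely read-only look at what the NAS thinks of the array. This is only a sketch: the device names (/dev/md127 for the data array, /dev/sda3 for a data partition, /mnt/recover as a mount point) are assumptions, so confirm them with lsblk and /proc/mdstat on your own unit first.

```shell
# Read-only inspection only: nothing here writes to the disks.
# /dev/md127 and the sdX3 data partitions are assumptions; confirm with lsblk.
cat /proc/mdstat || true                 # which md arrays the kernel has assembled
mdadm --examine /dev/sda3 || true        # per-disk RAID superblock (repeat for each disk)
btrfs filesystem show || true            # does the data volume's label still appear?
MP=/mnt/recover
mkdir -p "$MP" 2>/dev/null || true
mount -o ro /dev/md127 "$MP" 2>/dev/null || echo "read-only mount failed"
```

If the read-only mount succeeds, copy your data off before letting anyone attempt an in-place fix.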

Model: ReadyNAS RN12G1230v2|ReadyNAS 4200v2
Message 1 of 8
slauterman
Aspirant

Re: Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

I've encountered a very similar situation. I've been running a ReadyNAS 314 for some time and recently updated the firmware to 6.9, and then to 6.9.1 in the last week. I had trouble accessing the drives from Mac OS X El Capitan (10.11.6): I could connect via SMB but was unable to write any data. I then logged into the admin interface and restarted the ReadyNAS. Now it reports "No volume exists. NETGEAR recommends that you create a volume before configuring other settings. Navigate to the Systems -> Volumes page to create a volume." The Volumes page says I should "Remove inactive volumes to use the disk. Disk #1,2,3,4." All my disks are shown in red, labeled as Spare, the volume state is NEW, and when I select them my only choice is Format. They were part of a RAID 5 data volume.

 

If I remove the inactive volumes do I lose my data? What are my options? Right now it seems everything is lost.

Model: RN31400|ReadyNAS 300 Series 4-Bay
Message 2 of 8
avegelien
Aspirant

Re: Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

So we have the same situation; all my disks are red too. It seems OS 6 is somewhat buggy. As I said, the same thing happened on both of my 316s and now again on another ReadyNAS. I hope someone here can help us. Do you still have support for your 314?

Message 3 of 8
mdgm-ntgr
NETGEAR Employee Retired

Re: Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

If you look at smart_history.log have there been any changes recently?

 

Any errors in the diagnostics log from RAIDar?
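For anyone who wants to cross-check smart_history.log against the live state of the drives over SSH, something like this sketch should work. It assumes smartctl is available (ReadyNAS OS 6 is Debian-based) and that the disks appear as /dev/sda through /dev/sdl; adjust the glob for your unit.

```shell
# Cross-check: current SMART state straight from the drives, read-only.
# Assumes smartctl exists and disks are /dev/sda../dev/sdl (adjust as needed).
for d in /dev/sd[a-l]; do
  [ -b "$d" ] || continue                  # skip names that aren't real block devices
  echo "=== $d ==="
  smartctl -H "$d" || true                 # overall health verdict
  smartctl -A "$d" | grep -Ei 'realloc|pending|uncorrect' || true   # the attributes that matter
done
```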

Message 4 of 8
avegelien
Aspirant

Re: Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

Nothing out of the ordinary in smart_history.log

 

Message 5 of 8

Re: Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

I have a very similar situation to @avegelien and @slauterman. I recently updated to firmware 6.9.1 on an RN214. Like @slauterman I was able to read from SMB shares on a RAID 5 volume but not write to them. After a reboot, I now have the "No volume exists" error. In my case, all three disks in my RAID 5 pass short tests using smartctl. All of the SMART data in the logs and at the SSH console looks good. I saw these relevant lines in dmesg and the critical errors were picked up by RAIDar diagnostics:

 

[   22.120698] md: md127 stopped.
[   22.121803] md: bind<sdb3>
[   22.122033] md: bind<sdc3>
[   22.122267] md: bind<sda3>
[   22.124225] md/raid:md127: device sda3 operational as raid disk 0
[   22.124233] md/raid:md127: device sdc3 operational as raid disk 2
[   22.124239] md/raid:md127: device sdb3 operational as raid disk 1
[   22.125161] md/raid:md127: allocated 3240kB
[   22.125221] md/raid:md127: raid level 5 active with 3 out of 3 devices, algorithm 2
[   22.125226] RAID conf printout:
[   22.125232]  --- level:5 rd:3 wd:3
[   22.125238]  disk 0, o:1, dev:sda3
[   22.125243]  disk 1, o:1, dev:sdb3
[   22.125248]  disk 2, o:1, dev:sdc3
[   22.125403] md127: detected capacity change from 0 to 7991637573632
[   22.485486] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1 across:523708k
[   22.506940] BTRFS: device label 119c1b84:data devid 1 transid 789057 /dev/md127
[   22.669799] BTRFS info (device md127): enabling auto recovery
[   22.672132] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[   22.672144] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[   22.672192] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[   22.672203] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[   22.672243] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[   22.672251] BTRFS critical (device md127): unable to find logical 68648173568 len 4096
[   22.672265] BTRFS error (device md127): failed to read chunk root
[   22.734127] BTRFS error (device md127): open_ctree failed

 

RAIDar diagnostics also picked up this:

2017-12-15 00:03:01: BTRFS: error (device md127) in cleanup_transaction:1856: errno=-5 IO failure
2017-12-15 00:03:00: BTRFS: error (device md127) in btrfs_commit_transaction:2233: errno=-5 IO failure (Error while writing out transaction)

 

Sometime after midnight on 2017-12-15 is when I noticed that I was unable to write to SMB or AFP shares.
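The "failed to read chunk root" line looks like the key failure: the md RAID assembled cleanly (3 out of 3 devices), but BTRFS cannot read its chunk tree. A hedged sketch of the usual escalation path for that specific error, assuming /dev/md127 as in the dmesg above (on older kernels such as the one OS 6 ships, the mount option is spelled "recovery" rather than "usebackuproot"):

```shell
MD=/dev/md127          # the data array, per the dmesg output above (an assumption)
MP=/mnt/recover
mkdir -p "$MP" 2>/dev/null || true

# 1) Try a read-only mount using a backup tree root; fall back to the
#    older spelling of the same option on pre-4.6 kernels.
mount -o ro,usebackuproot "$MD" "$MP" 2>/dev/null \
  || mount -o ro,recovery "$MD" "$MP" 2>/dev/null \
  || echo "backup-root mount failed"

# 2) If mounting fails entirely, btrfs restore can copy files out without
#    mounting at all. -D is a dry run: it only lists what would be recovered.
btrfs restore -D "$MD" /tmp 2>/dev/null || echo "restore dry run failed"

# 3) Last resort, only after data is copied off or the disks are imaged:
#    rebuild the chunk tree in place. This WRITES to the device and can make
#    things worse, so it is left commented out here.
# btrfs rescue chunk-recover "$MD"
```

None of the first two steps modify the volume, so they are safe to try while waiting on support.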

 

I made a post here today but I'm not sure if it has gone through yet:

https://community.netgear.com/t5/Using-your-ReadyNAS/RN214-Firmware-6-9-1-No-Volumes-Exist/m-p/14582...

Model: RN214|4 BAY Desktop ReadyNAS Storage
Message 6 of 8
mdgm-ntgr
NETGEAR Employee Retired

Re: Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

Each of you is likely to have an issue that differs from the others', which makes it hard to give advice on something like this. That said, those of you with units that shipped with OS6 would at least be best off contacting support.

Message 7 of 8
MaxxMark
Luminary

Re: Remove inactive volumes to use the disk. Disk 1,2,3,4,5,6,7,8,9,10,11,12 on a RN4200V2

After creating a topic two days ago (https://community.netgear.com/t5/Using-your-ReadyNAS/OS6-stuck-at-booting-mounting-btrfs/td-p/146078...) I noticed this thread, which seems to describe the same issue I have encountered.

 

Although I did not describe it in my post (see the link above), I had the same situation: I experienced sluggish behaviour and could no longer write to the NAS (I may have misattributed the inability to write as general sluggishness).

 

I tried rebooting the system, but it didn't respond to the shutdown (the LCD kept showing the 'shutting down' message). After a while I decided to do a hard reboot. Afterwards it came up under high load (see my topic) and eventually continued booting, but I couldn't see the volumes in the admin interface. I have not rebooted the system since. I am on the latest 6.8.x release and had not yet upgraded to 6.9.x.

 

My dmesg log can be seen at http://www.maxxmark.com/dropbox/nas.dmesg.out.txt and http://www.maxxmark.com/dropbox/nas.dmesg.out.2.txt. I have also attached a screenshot of the web interface.

 

It does feel like a filesystem problem, since the OS partition did mount, and as I understand it that partition is spread across the same disks as the data. So it seems the RAID 5 is intact but the filesystem got corrupted.
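That theory (array assembled, filesystem damaged) can be checked from SSH without writing anything. A sketch, assuming the data array is md127 as it is on other OS 6 units; check /proc/mdstat for the real name on your box:

```shell
# Read-only checks to separate a RAID problem from a filesystem problem.
ARRAY=/dev/md127                         # md127 as the data array is an assumption
cat /proc/mdstat || true                 # "[UUU]" after the array means all members are up
mdadm --detail "$ARRAY" || true          # "State : clean" points at an intact RAID layer
btrfs check --readonly "$ARRAY" || true  # filesystem-level check; writes nothing
```

If mdadm reports a clean array but btrfs check reports errors, the damage is at the filesystem layer, matching the "failed to read chunk root" reports earlier in this thread.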

 

 

My NAS has been out of warranty for years now. Is there any way to get some kind of support for this, even paid support? Although I have an online/offsite backup of my NAS, restoring 12TB of data takes a lot of time, and recovering the volume would significantly reduce the restore time 🙂

Model: RNDP600E |ReadyNAS Pro Pioneer Edition|EOL
Message 8 of 8