NathanWoodruff (Aspirant)
Aug 16, 2021
ReadyNAS 316 upgrade to 6.10.5
I just added an additional 18TB drive to my 316. That took 5 days and change, and when it was done I decided to remove the 12TB drive and replace it with another 18TB drive. There were 6 drives and now there are only 5 drives.
I waited until the 12TB drive could be removed from the JBOD RAID before updating to 6.10.5. I removed the 12TB drive this morning and the RAID showed a bit over 10TB free. I used it all day with the new 18TB drive in the 316. This evening I updated from the 6.10.4 hotfix to 6.10.5. When it rebooted, the 18TB drive shows as blank, and the other 4 drives that have been in the ReadyNAS for more than a year now show up as RED when you click on VOLUMES on the ADMIN page.
DISK1 still shows that there is 58TB of data on that drive, but the new 18TB drive is no longer a part of the DISK1 RAID. I get the message "Remove Inactive Volumes to use the disk #2, #3, #5, #6" for the drives that are in RED.
Please don't tell me that this update killed the RAID
29 Replies
- StephenB (Guru - Experienced User)
I'm a bit confused about what RAID mode you were in before.
Did you set up JBOD RAID groups on each disk, and concatenate them together? Or did you do something else?
What took 5 days? With JBOD there shouldn't have been any resyncs needed.
NathanWoodruff wrote:
DISK1 still shows that there is 58TB of data on that drive, but the new 18TB drive is no longer a part of the DISK1 RAID. I get the message "Remove Inactive Volumes to use the disk #2, #3, #5, #6" for the drives that are in RED.
Please don't tell me that this update killed the RAID
The screenshot shows 58 TB in the volume (not the drive), which is a bit weird with an 18 TB disk in JBOD. Normally the inactive volume message happens when the RAID array is out of sync or when there is some corruption in the BTRFS file system. It's not the update itself that did the damage - something was likely already wrong, and any reboot of the NAS would likely have given you the same symptoms.
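If you want to see which of those it is before touching anything, a couple of read-only checks over SSH usually tell you. This is just a sketch, not specific advice for your box: it assumes root SSH access and Python 3.5+ on the NAS (you can equally type the same commands straight into the shell), and the /data mount point for the default volume is an assumption. Nothing here modifies the array or the file system.

```python
#!/usr/bin/env python3
# Read-only RAID/BTRFS health checks (sketch). Assumes root SSH access and
# Python 3.5+ on the NAS; the /data mount point is an assumption.
import subprocess

CHECKS = [
    ["cat", "/proc/mdstat"],               # are the md RAID groups assembled and in sync?
    ["btrfs", "filesystem", "show"],       # can BTRFS still see the data volume and its devices?
    ["btrfs", "device", "stats", "/data"], # per-device error counters, if the volume is mounted
]

for cmd in CHECKS:
    print("### " + " ".join(cmd))
    try:
        result = subprocess.run(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True, timeout=30)
        print(result.stdout)
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        print("(could not run: {})".format(exc))
```

Roughly speaking, if /proc/mdstat shows a missing or degraded md group it's the RAID side; if the arrays look assembled but btrfs complains, it's the file system.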
- NathanWoodruff (Aspirant)
It was not X-RAID but Flex-RAID, because way back when I was using a 202, 1TB drives were expensive and I wanted to get as much storage space as possible. This JBOD volume is the same one that started on the original 2 x 1TB drives. I moved those drives to a 424, added more drives, and removed the 1TB drives one at a time. Over the years it moved to the 316, which is the current NAS, swapping ever larger drives in and out.
It has worked well until last night.
How do I fix this to get the data back?
- StephenB (Guru - Experienced User)
NathanWoodruff wrote:
It was not X-RAID but Flex-RAID, because way back when I was using a 202, 1TB drives were expensive and I wanted to get as much storage space as possible.
I get that, but we still don't know what RAID modes you set up with Flex-RAID. Did you have a single volume before this happened, or multiple volumes?
NathanWoodruff wrote:
How do I fix this to get the data back?
You basically have three options.
- Do a factory default, reconfigure the NAS, and restore all the data from backup. That of course will be very painful, given the size of your disks.
- Contact NETGEAR paid support (via my.netgear.com) and get a service contract to fix this. They should be able to do it remotely.
- Attempt to fix it yourself via the Linux command line.
There are multiple possible causes, and it is easy to do more damage if you don't know what you are doing. So I generally won't provide targeted advice on option 3 for this particular problem, as I don't want to create more risk for your data.
However, sometimes rn_enthusiast or others will analyze your situation and give you advice on what commands you need to fix it.
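If you do post here for that kind of help, it speeds things up to include the raw output of the usual read-only commands (cat /proc/mdstat, mdadm --examine on the data partitions, btrfs filesystem show). Purely as a sketch of how you might collect all of that into one file to attach - it assumes root SSH access and Python 3.5+ on the NAS, and the /dev/sd?3 pattern for the data partitions is an assumption about the typical ReadyNAS layout, not something checked against your box:

```python
#!/usr/bin/env python3
# Collect read-only RAID/BTRFS details into one report file to share when
# asking for help. Assumes root SSH access and Python 3.5+; the /dev/sd?3
# data-partition pattern is an assumption about the usual ReadyNAS layout.
import glob
import subprocess

commands = [
    "cat /proc/mdstat",
    "mdadm --detail --scan",
    "btrfs filesystem show",
]
# One --examine per data partition, so helpers can see the RAID superblocks.
commands += ["mdadm --examine " + dev for dev in sorted(glob.glob("/dev/sd?3"))]

with open("/root/raid_report.txt", "w") as report:
    for cmd in commands:
        report.write("\n### " + cmd + "\n")
        result = subprocess.run(cmd.split(), stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        report.write(result.stdout)

print("Wrote /root/raid_report.txt")
```

Everything in there only reads the RAID superblocks and file system metadata; it doesn't change anything on the disks.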