
Re: How to get volume back to read-write mode

timhood
Star

How to get volume back to read-write mode

I have a ReadyNAS 428 with two RAID5 Flex-RAID volumes. One volume experienced a disk failure and the volume was changed to read-only. I replaced the disk, formatted it, added it to the volume, and waited for ReadyNAS to rebuild it. When complete, the volume status changed to "healthy," but the volume is still read-only. I rebooted the ReadyNAS and it's still read-only. How can I mark the volume read-write, as it should be fine? I do not want to reset the entire ReadyNAS due to the time required to rebuild and restore both volumes (8TB data) and because the other volume is perfectly fine. If absolutely necessary, I could destroy and rebuild the affected volume that is currently read-only, but surely there's an easier way?

Message 1 of 38

Accepted Solutions
Sandshark
Sensei

Re: Returning read-only volume back to read-write mode--how to?

I had that happen to me.  In my case, I knew it was because my EDA500 came unplugged during a write operation, and it was fixed.  But everything I tried to make it read/write would not "take" -- it went back to read-only.  It sounds like you already have a backup, which is good.  Just make sure it's up to date and destroy and re-create the volume.  I wouldn't trust the volume to not have a hidden remaining issue if you do anything else.  The NAS makes the volume read-only to keep you from doing something that may destroy it but give you a chance to back it up.  Take the hint.


Message 4 of 38

All Replies
StephenB
Guru

Re: Returning read-only volume back to read-write mode--how to?


@timhood wrote:

I have a #ReadyNAS #RN428 with two RAID5 Flex-RAID volumes. One volume experienced a disk failure and the volume was changed to read-only. 


A normal disk failure won't change the volume to read-only.  Something else must also have gone wrong.

 

The safest thing to do is to destroy the volume, recreate it, and restore data from backup.

 

Maybe download the full log zip, and look for errors.  My guess is that there are some btrfs errors on the volume.
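For instance, after extracting the log zip on a PC, something like this can hunt for btrfs trouble (a sketch only; file names such as dmesg.log and system.log are the usual ones in a ReadyNAS log zip, but check what your download actually contains):

```shell
# Search the extracted ReadyNAS logs for btrfs complaints and
# forced read-only events (file names assumed from a typical log zip).
grep -iE "btrfs.*(error|corrupt|read-only|readonly)" dmesg.log system.log

# Count matching lines to gauge how noisy the problem is:
grep -icE "btrfs.*(error|corrupt|read-only|readonly)" dmesg.log
```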

Message 3 of 38
timhood
Star

Re: Returning read-only volume back to read-write mode--how to?

Thanks. I'm OK with destroying the volume, because the rebuild/recovery process is manageable in my case. I had seen a previous solution that suggested resetting the entire ReadyNAS back to factory state and starting from scratch, which would not be acceptable, as the time to recover would be far too long. I destroyed and re-created the volume and it began a resyncing process.

 

Interestingly, looking at the logs, it seems there were several issues compounded that played a role. First was a drive that was failing and may have ultimately completely failed. Second was changes to an e-mail password that caused me not to receive the alerts about the failing drive. Third was when I did a firmware upgrade, after the reboot, that was ultimately when the OS didn't like the state of the volume and put it in read-only mode. Even after replacing the drive and resyncing, it stayed in read-only mode.

Message 5 of 38
timhood
Star

Re: Returning read-only volume back to read-write mode--how to?

My concern, based on reading another "accepted solution", was that I would have to reset the entire NAS. I've taken the suggestion of destroying and re-creating the volume. Thanks for confirming that simply destroying the volume was enough in this case.

Message 6 of 38
StephenB
Guru

Re: Returning read-only volume back to read-write mode--how to?


@timhood wrote:

My concern, based on reading another "accepted solution", was that I would have to reset the entire NAS. 


I have recommended this when people have only one volume, or when the corrupted volume is the OS partition. But if only one volume is damaged, destroying and recreating it is enough.

 

If you have any apps installed, then you should figure out what volume holds the .apps folder.

Message 7 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

I am adding to this discussion because I am looking for an answer to the exact same question.

 

My ReadyNAS 516 accidentally filled up to the max (due to an emergency backup of another drive) and threw a No Space Left error. The log reflects this to be the only cause, and there is NO hardware issue, which has been confirmed. So there is NO backup needed of the system, which I already have (mirrored NAS).

I just need to be able to delete files again.

 

The system however keeps switching to ReadOnly mode when I reboot due to lack of space, and due to ReadOnly mode I cannot create more space by deleting files...

 

So, with all the hardware being just fine, how do I tell the system to let me erase files so I can create more space?

I am SSH inept, but I can follow instructions if I have to.

 

I can enter SSH via Putty and I find myself at the prompt:

admin@NAS516:~$

 

What can I do next?

Message 8 of 38
timhood
Star

Re: How to get volume back to read-write mode

I don't have much experience here, and I couldn't find anything I could do (even via SSH) to put the volume back into R/W mode, but I thought of another possibility. Do you have the ability to add another drive to the volume, even if just temporarily? That should open up free space and in theory, it should allow read/write again. 

Message 9 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

Alas no, I have only 16TB drives (6x) and all bays are populated. I would have to buy an 18TB drive just to test that, and even then I couldn't say if it would add to the space, as it's RAID5 in X-RAID. If someone could confirm that, I might do that, but it'd be a real bummer if it just uses 16TB of the 18TB and keeps complaining.

 

Super annoying, as it says it has 1.28GB free space... Just how much does it need to go back to R/W?

 

Would a defrag, balance or scrub possibly free up some space?

Afraid to try without an expert telling me to do so.

 

I would try deleting files through SSH, but I can't find a way to see them. SSH noob...

Message 10 of 38
Sandshark
Sensei

Re: How to get volume back to read-write mode

@tijgert , to add space, you'll need at least two larger drives.  But now that the damage is done to the volume, more space won't fix the problem.  It will never put itself back in read/write mode and when I tried to force it via SSH (which you've already said you have no familiarity with), nothing "took".  Since you have a current backup, you just need to bite the bullet and destroy the volume, re-create it, and restore files from your backup.  Every time you re-boot, you risk losing the volume completely, and thus having to delete it or even do a full factory default.

 

If you are going to add space, the time to do it is after you delete the volume and before you re-create it.  That way, there will be only one re-sync except the short ones for the OS and swap partitions that occur when you swap a drive.  I would certainly go with something bigger than 18TB.  Assuming you do swap out only two, the new RAID group will be RAID1, so you will only see half of the additional space.  Two 18TB drives are a big expense for a 2TB increase in size.
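The capacity math here can be sketched as follows; the 16TB-to-18TB numbers are the example from this thread, and the RAID-group behavior is the simplified rule described above (the real X-RAID layering has more cases):

```shell
# Usable space gained when upgrading some drives in an X-RAID array.
# Simplified rule: the extra space on two upgraded drives forms a RAID1
# pair; on three or more it forms a RAID5 group (n-1 drives' worth usable).
old_tb=16; new_tb=18; upgraded=2
extra=$((new_tb - old_tb))
if   [ "$upgraded" -lt 2 ]; then gained=0          # one bigger drive adds nothing
elif [ "$upgraded" -eq 2 ]; then gained=$extra     # RAID1 pair over the extra space
else gained=$((extra * (upgraded - 1)))            # RAID5 over the extra space
fi
echo "usable space gained: ${gained}TB"
```

With two 18TB drives replacing 16TB ones, that works out to the 2TB increase mentioned above.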

 

A couple of notes for you both: if you are saving and restoring a configuration to get the shares and permissions the same, the new volume must have the same name as the old one. If the volume is the primary (or only) one, it will contain home folders. If you have two volumes, the NAS will create new home folders on the other one and designate it as primary when you destroy the first; but it doesn't move the contents. If it is your only volume, then restoring a configuration will not create home folders. That's normally done when the user first logs in. But there is a program you can call from SSH that will create them (one at a time): execute mkhomedir_helper <username> for every user (including admin) to create home shares. Make sure you do that before you try to restore any files to home folders. Otherwise, the restoration process will likely create a normal folder instead of a BTRFS sub-volume for each user, which will be problematic.
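From SSH, those steps might look like this (a sketch; the user names are placeholders, and mkhomedir_helper must be run as root):

```shell
# Re-create home shares one user at a time after restoring the volume.
# Run as root; replace the list with your actual account names.
for u in admin alice bob; do
    mkhomedir_helper "$u"
done
```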

Message 11 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

Thanks for ripping off the band aid. It seems I am f*cked then, as in having to spend a lot of time rebuilding a perfectly good machine just because it's not smart or flexible enough to let me fix a non-problem.

 

It's like my car not letting me add gas to it because I ran it dry, or me not being able to crap because I ate too much. Doesn't make sense.

Thanks though.

Message 12 of 38
timhood
Star

Re: How to get volume back to read-write mode

In my case, I had two volumes and had manually moved the home folders to the second volume (using a slightly modified version of these instructions: https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/ReadyNAS-428-Move-Home-Folders-to-d.... So, after re-creating the second volume, I needed to repeat my prior steps of creating the home folders on the second volume again.

Message 13 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

My setup is as vanilla as it gets.

1 volume, Raid5, 1 account to get in other than admin, zero apps, no time machine.

This total lockup/lockdown should not have happened.

Message 14 of 38
StephenB
Guru

Re: How to get volume back to read-write mode


@tijgert wrote:

My setup is as vanilla as it gets.

1 volume, Raid5, 1 account to get in other than admin, zero apps, no time machine.

This total lockup/lockdown should not have happened.


Your issue was likely with BTRFS, and not with RAID.  The other possibility is a full OS partition.  You could download the full log zip file, and see if that helps you sort out exactly what happened.

 

BTRFS does need quite a bit of free space.  I always recommend keeping the free space at 15% or more.  This is particularly important if snapshots or bit rot protection are enabled for any share.  The NAS starts to take protective measures when it drops to 5%.  
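That free-space guideline can be checked over SSH; here is a hedged sketch (it assumes the data volume is mounted at /data, the usual ReadyNAS OS6 location, and GNU df):

```shell
# Percent free on the data volume, with a warning below the 15% guideline.
total_kb=$(df --output=size -k /data | tail -n 1)
avail_kb=$(df --output=avail -k /data | tail -n 1)
pct_free=$(( 100 * avail_kb / total_kb ))
echo "free: ${pct_free}%"
if [ "$pct_free" -lt 15 ]; then echo "WARNING: below the 15% guideline"; fi

# Plain df can mislead on btrfs; this shows the fuller allocation picture:
btrfs filesystem usage /data
```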

 

While it might be possible to repair the BTRFS volume, it is difficult to be certain that there is no residual corruption of the file system.  It is safest to recreate the volume and then restore the files from backup.

 

 

Message 15 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

I checked the log and it states clearly that the error happened due to no space left and that's what caused it to switch to ReadOnly mode. Nothing else. It still has over a GB of space though.

 

I understand that BTRFS needs more space, but an emergency drive save to the NAS filled it to the top, I just didn't expect it to be semi-bricked by that. If it would just let me wipe those emergency saves that I rescued to a new drive then it would instantly have 10TB of space and everything will be solved.

 

All I need is some sort of access that will let me delete a 10TB folder and Bob's my uncle.

Message 16 of 38
StephenB
Guru

Re: How to get volume back to read-write mode


@tijgert wrote:

 

All I need is some sort of access that will let me delete a 10TB folder and Bob's my uncle.


You could try re-mounting it from tech support mode.

Message 17 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

I'm not too proud to grasp at straws...

So, how do I do that?

Message 18 of 38
StephenB
Guru

Re: How to get volume back to read-write mode


@tijgert wrote:

I'm not too proud to grasp at straws...

So, how do I do that?


Start by downloading the log zip file, and post partition.log and mdstat.log. That will provide the information needed to manually assemble the RAID array(s).

 

I'll give you other steps once I have that info.

 

 

 

Message 19 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

Appreciate it. Here goes.

 

Mdstat.log

 

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1]
78105144000 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md1 : active raid10 sda2[0] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
1566720 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]

md0 : active raid1 sda1[1] sdb1[2] sdc1[3] sdd1[4] sde1[5] sdf1[6]
4190208 blocks super 1.2 [6/6] [UUUUUU]

unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Wed Feb 15 19:51:42 2023
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent

Update Time : Fri Aug 2 15:06:20 2024
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Consistency Policy : unknown

Name : 7c6e3766:0 (local to host 7c6e3766)
UUID : bce2e184:a906144d:1961ca2d:1a43aebf
Events : 319

Number Major Minor RaidDevice State
1 8 1 0 active sync /dev/sda1
6 8 81 1 active sync /dev/sdf1
5 8 65 2 active sync /dev/sde1
4 8 49 3 active sync /dev/sdd1
3 8 33 4 active sync /dev/sdc1
2 8 17 5 active sync /dev/sdb1
/dev/md/1:
Version : 1.2
Creation Time : Wed Feb 15 19:57:40 2023
Raid Level : raid10
Array Size : 1566720 (1530.00 MiB 1604.32 MB)
Used Dev Size : 522240 (510.00 MiB 534.77 MB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent

Update Time : Fri Jul 26 00:31:30 2024
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Layout : near=2
Chunk Size : 512K

Consistency Policy : unknown

Name : 7c6e3766:1 (local to host 7c6e3766)
UUID : 8942c986:4e610533:bc3bf4e8:1c3a56ea
Events : 19

Number Major Minor RaidDevice State
0 8 2 0 active sync set-A /dev/sda2
1 8 18 1 active sync set-B /dev/sdb2
2 8 34 2 active sync set-A /dev/sdc2
3 8 50 3 active sync set-B /dev/sdd2
4 8 66 4 active sync set-A /dev/sde2
5 8 82 5 active sync set-B /dev/sdf2
/dev/md/NAS516-0:
Version : 1.2
Creation Time : Wed Feb 15 19:57:40 2023
Raid Level : raid5
Array Size : 78105144000 (74486.87 GiB 79979.67 GB)
Used Dev Size : 15621028800 (14897.37 GiB 15995.93 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent

Update Time : Fri Aug 2 10:22:09 2024
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Consistency Policy : unknown

Name : 7c6e3766:NAS516-0 (local to host 7c6e3766)
UUID : b3917b1f:62e0a85a:64524e32:dddac605
Events : 737

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
5 8 83 5 active sync /dev/sdf3

 

Partitions.log

 

major minor #blocks name

8 0 15625879552 sda
8 1 4194304 sda1
8 2 524288 sda2
8 3 15621160904 sda3
8 16 15625879552 sdb
8 17 4194304 sdb1
8 18 524288 sdb2
8 19 15621160904 sdb3
8 32 15625879552 sdc
8 33 4194304 sdc1
8 34 524288 sdc2
8 35 15621160904 sdc3
8 48 15625879552 sdd
8 49 4194304 sdd1
8 50 524288 sdd2
8 51 15621160904 sdd3
8 64 15625879552 sde
8 65 4194304 sde1
8 66 524288 sde2
8 67 15621160904 sde3
8 80 15625879552 sdf
8 81 4194304 sdf1
8 82 524288 sdf2
8 83 15621160904 sdf3
9 0 4190208 md0
9 1 1566720 md1
9 127 78105144000 md127

Message 20 of 38
Sandshark
Sensei

Re: How to get volume back to read-write mode

@tijgert wrote:

If it would just let me wipe those emergency saves that I rescued to a new drive then it would instantly have 10TB of space and everything will be solved.

 


I don't know whether the NAS will never put itself back in read/write mode just because BTRFS doesn't.  If nothing is damaged on your BTRFS volume (and I don't think you know that to be the case), you may be able to do it yourself via SSH or in support mode, or maybe a re-boot will do it once the full status is remedied.  When this happened to me, the volume was definitely damaged, so my experience of not successfully re-mounting as read/write may not apply.  But once the volume was full, other damage may have occurred, even though going read-only is an attempt to prevent that.

 

Here is a reddit post by somebody who solved the problem, though not on a ReadyNAS:  https://www.reddit.com/r/btrfs/comments/ipagcl/full_disk_stuck_as_read_only/ .  As you can see, it can require a lot of work via the command prompt.

 

If you can successfully mount it as read/write in tech support mode, you need to wait some time after deleting some files before you re-boot.  That's because you likely ran out of space for metadata, not normal data, and it takes BTRFS some time to recover metadata from deleted files.  You could execute top and see when that recovery process is done.  You may also want to do a balance, which will reclaim additional metadata space.
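Put together, a recovery attempt along those lines might look like this from tech support mode or SSH as root (a hedged sketch only: the /data mount point and the folder name are assumptions, remounting may be refused if btrfs has flagged errors, and an up-to-date backup should exist before trying any of it):

```shell
# Confirm the device and whether it is currently mounted read-only:
mount | grep md127

# Try to remount read/write (btrfs may refuse if it detected errors):
mount -o remount,rw /data

# Delete the space-hogging files (folder name is hypothetical):
rm -rf "/data/emergency-backup"

# After the deletes settle, reclaim space; starting with nearly-empty
# chunks (-dusage/-musage filters) keeps the balance cheap:
btrfs balance start -dusage=5 /data
btrfs balance start -musage=5 /data
```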

Message 21 of 38
StephenB
Guru

Re: How to get volume back to read-write mode


@Sandshark wrote:

 If nothing is damaged on your BTRFS volume (and I don't think you know that to be the case),

@tijgert: That is my concern also.  I suspect something did get damaged when the volume became full.

 

FWIW, dmesg.log likely will give a useful hint when the volume was mounted.

 

 

 

 

Message 22 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

Weird thing happening: I posted the logs twice, and twice they got removed.

Is there a system preventing logs from being posted?

 

I agree that damage *might* have occurred, but I expect that damage to be in the emergency-saved files that are no longer needed. Any damage would be erased by deleting those files and then scrubbing/balancing, I would think, as all other files were never moved or altered.

 

Regardless, it's worth a try and then maybe a bit by bit check with the mirror to confirm, even just for knowing for future events or to help others in this same mess.

Message 24 of 38
tijgert
Guide

Re: How to get volume back to read-write mode

*double*

 

The logs have been posted a few posts back.

Message 25 of 38