Forum Discussion
dbrami
Apr 27, 2023Aspirant
Can't go back to RW mode after system "protected" itself
Hi all, I've been struggling with this for a few weeks and can't use my NAS since this started. I was transferring some files to the NAS as a mounted "network drive" from my MacBook. I cancelled th...
- Apr 28, 2023
I am seeing two things.
One is that sde and sdf (disks 5 and 6) are showing some errors in disk_info.log
Device: sde
Health Data:
Current Pending Sector Count: 3
Uncorrectable Sector Count: 3
Device: sdf
Health Data:
Current Pending Sector Count: 5
Uncorrectable Sector Count: 4
RAID-5 can't handle two disk failures, so that might be part of the puzzle here. Though these counts are small, it would be good to power down and connect these disks to a PC (SATA or USB dock). Test them with WD's Dashboard software, running the long (full) non-destructive test. Label the disks as you remove them, so you can put them back in the right slots. Keep the NAS powered down until you replace the disks.
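For reference, the same counters that ReadyNAS writes to disk_info.log come from the drives' SMART attributes, which smartmontools can read on a PC dock with something like `smartctl -A /dev/sde` (device name assumed; yours may differ). A minimal sketch, parsing a sample excerpt of that output rather than live hardware:

```shell
# Hypothetical sample of smartctl -A output for one disk (values taken
# from the counts reported in this thread; table layout is illustrative).
smart_sample='197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       3
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       3'

# Print each attribute name with its raw value (the last field):
echo "$smart_sample" | awk '{print $2 "=" $NF}'
# → Current_Pending_Sector=3
# → Offline_Uncorrectable=3
```

A nonzero Current_Pending_Sector count means the drive has sectors it could not read and is waiting to remap, which is why a full non-destructive surface test is worth running.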
The other error (which is what is driving the read-only status) is here:
Apr 08 10:50:54 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:54 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS: error (device md127) in btrfs_commit_transaction:2249: errno=-5 IO failure (Error while writing out transaction)
Apr 08 10:50:58 BananaS kernel: BTRFS info (device md127): forced readonly
One thing that is odd is that I am not seeing any corresponding mdadm or raw disk errors in kernel.log - which we would normally see.
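For reference, the cumulative counters behind those log lines (write, read, flush, corruption, generation errors) can also be queried on a live system with `btrfs device stats <mountpoint>` (mount point would depend on your unit). A minimal sketch pulling the write-error counter out of one of the log lines above:

```shell
# One of the kernel log lines from this thread:
log_line='Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0'

# grep -o prints only the matching part, so this extracts the
# write-error counter ("wr N") that btrfs is tracking:
echo "$log_line" | grep -o 'wr [0-9]*'
# → wr 10
```

Ten failed writes followed by the failed transaction commit is exactly the sequence that triggers btrfs's "forced readonly" self-protection.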
Not sure yet on what you might do with ssh, but I strongly recommend making a backup of any data you care about before attempting a repair. If it were my system, I'd start over with a fresh volume (factory default), because the repair might not fix everything.
StephenB
Apr 27, 2023Guru - Experienced User
dbrami wrote:
To "fix" any issues, i pulled out a drive and put it back in, knowing some file scanning happens. The system status shows "healthy" but i still can't write any data.
Very bad idea actually. You are lucky you didn't lose all your data.
dbrami wrote:
My research has led me to believe i need to unmount my subvolumes and remount as rw.
No. The mitigation needed depends on exactly what the file system error is - but unmounting/remounting the subvolume is never the answer.
The error you posted is saying that you couldn't roll back a snapshot. That of course will fail if the volume is already read-only, because the rollback requires write access to the volume. So we don't yet have the info needed to tell what is wrong. Possibly it is in the full log zip file, so I suggest downloading that right away. But that is also problematic, since the error happened on 8 April or before, so the needed details might no longer be in the logs.
If you don't have a full backup of your data, then the first step is to do that. It is definitely at risk. If you don't have enough storage, then purchase external drives, or whatever you need. Back up what critical (irreplaceable) data you can while waiting for the storage to arrive.
After you've ensured data safety, you have two basic options. One is to try to repair what is wrong. The other is to do a factory default, reconfigure the NAS, and restore data from backup. Although the second option can be a lot of work, in practice it is often quicker than trying to repair the problem (and success is more certain).
dbrami wrote:
Rolling back to snapshots doesn't work and Tech Support won't respond to me.
As I said above, rolling back to snapshots requires writing to the volume. The NAS isn't allowing that. The rationale is that any writes to the volume are likely to result in data loss.
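You can confirm the read-only state yourself over ssh by looking at the mount options, e.g. `mount | grep md127` (the /data mount point below is an assumption; check your own unit). A sketch against a simulated mount line:

```shell
# Hypothetical check for a read-only btrfs mount. On a live NAS,
#   mount | grep md127
# would print something like the simulated line below:
mount_line='/dev/md127 on /data type btrfs (ro,noatime)'

# "ro" as the first mount option means the kernel has the volume read-only:
case "$mount_line" in
  *'(ro'*) echo "volume is read-only" ;;
  *)       echo "volume is read-write" ;;
esac
# → volume is read-only
```

If the options start with `rw` instead, snapshot rollbacks and normal writes would be permitted again.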
Unfortunately, many folks here are finding Netgear support to be non-responsive. My own opinion is that Netgear has been quiet-quitting their storage business for some years now, and that they no longer have trained support staff. Netgear hasn't announced anything - but the facts are that new ReadyNAS units can't be found for purchase, and Netgear hasn't introduced a new ReadyNAS product since 2017.
Sandshark
Apr 27, 2023Sensei - Experienced User
The volume will go into read-only mode when it becomes damaged in a way that additional writes will likely cause additional problems. But it's giving you time to back up data before something worse does happen. Frankly, without some previous Linux knowledge or remote help, you are unlikely to fix it, at least in a permanent manner. So backing up and doing a factory default is, IMHO, your best solution. It's the one I took when faced with this issue. But, I did have an up-to-date backup already.