Forum Discussion
dbrami
Apr 27, 2023 - Aspirant
Can't go back to RW mode after system "protected" itself
Hi all,
I've been struggling with this for a few weeks and can't use my NAS since this started.
I was transferring some files to the NAS as a mounted "network drive" from my Macbook. I cancelled the transfer midway and the NAS did not appreciate that.
It went into read-only mode.
Apr 08, 2023 08:44:37 PM | Snapshot: Share or LUN BananaMovies failed to roll back from snapshot 2023_03_18__00_00_29.
Apr 08, 2023 10:51:06 AM | System: Alert message failed to send.
Apr 08, 2023 10:51:06 AM | Volume: The volume data encountered an error and was made read-only. It is recommended to backup your data.
To "fix" any issues, I pulled out a drive and put it back in, knowing some file scanning happens. The system status shows "healthy", but I still can't write any data.
I have 5TB + of free space available out of 10TB. So space is not an issue.
My research has led me to believe I need to unmount my subvolumes and remount them as rw.
Here are *my* subvolumes after running the `mount` command as the root user over SSH:
/dev/md127 on /apps type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=258,subvol=/.apps)
/dev/md127 on /home type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=257,subvol=/home)
/dev/md127 on /var/ftp/BananaMovies type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=271,subvol=/BananaMovies)
/dev/md127 on /run/nfs4/data/BananaMovies type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=271,subvol=/BananaMovies)
/dev/md127 on /run/nfs4/data/CloudSync type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=2166,subvol=/CloudSync)
/dev/md127 on /run/nfs4/data/Data_backup type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=6290,subvol=/Data_backup)
/dev/md127 on /run/nfs4/data/Photos_Videos type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=3353,subvol=/Photos_Videos)
/dev/md127 on /run/nfs4/home type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=257,subvol=/home)
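For reference, the read-only state can be pulled out of that `mount` output with a short awk one-liner. This is just a sketch; the field positions assume the exact output format shown above:

```shell
# Summarize which md127 btrfs mounts are read-only vs read-write.
# $3 is the mount point, $6 is the "(ro,noatime,...)" options field.
mount | awk '/^\/dev\/md127/ {
    state = ($6 ~ /^\(ro[,)]/) ? "read-only" : "read-write"
    print $3, state
}'
```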
Rolling back to snapshots doesn't work and Tech Support won't respond to me.
Can you help me with recovery PLEASE?
I am seeing two things.
One is that sde and sdf (disks 5 and 6) are showing some errors in disk_info.log
Device: sde
Health Data: Current Pending Sector Count: 3, Uncorrectable Sector Count: 3
Device: sdf
Health Data: Current Pending Sector Count: 5, Uncorrectable Sector Count: 4
RAID-5 can't handle two disk failures, so that might be part of the puzzle here. Although these counts are small, it would be good to power down and connect these disks to a PC (SATA or a USB dock). Test them with WD's Dashboard software, running the long (full) non-destructive test. Label the disks as you remove them, so you can put them back in the right slots. Keep the NAS powered down until you replace the disks.
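If you'd rather check from a Linux shell (including the NAS's own root shell), smartmontools can read the same counters. A sketch, assuming `smartctl` is available and the device names sde/sdf still apply:

```shell
# Read the pending/uncorrectable sector attributes for the two suspect disks.
for d in sde sdf; do
    echo "=== /dev/$d ==="
    smartctl -A "/dev/$d" | grep -E 'Current_Pending_Sector|Offline_Uncorrectable'
done

# The long, non-destructive self-test (takes hours) and its results:
#   smartctl -t long /dev/sde
#   smartctl -l selftest /dev/sde
```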
The other error (which is what is driving the read-only status) is here:
Apr 08 10:50:54 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 1, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:54 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 5, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 6, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 9, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS error (device md127): bdev /dev/md127 errs: wr 10, rd 0, flush 0, corrupt 0, gen 0
Apr 08 10:50:58 BananaS kernel: BTRFS: error (device md127) in btrfs_commit_transaction:2249: errno=-5 IO failure (Error while writing out transaction)
Apr 08 10:50:58 BananaS kernel: BTRFS info (device md127): forced readonly
One thing that is odd is that I am not seeing any corresponding mdadm or raw-disk errors in kernel.log, which we would normally see.
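The same wr/rd/flush/corrupt counters from the kernel log can also be read directly, and harmlessly, since it only reads. A sketch, assuming the btrfs-progs build on the NAS supports the `device stats` subcommand:

```shell
# Per-device btrfs error counters for the data volume.
btrfs device stats /dev/md127

# Quick check: sum all counters to see whether any errors are recorded at all.
btrfs device stats /dev/md127 | awk '{ total += $2 } END { print "total errors:", total }'
```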
I'm not sure yet what you might do over SSH, but I strongly recommend making a backup of any data you care about before attempting a repair. If it were my system, I'd start over with a fresh volume (factory default), because the repair might not fix everything.
10 Replies
Replies have been turned off for this discussion
- StephenB (Guru) - Experienced User
dbrami wrote:
To "fix" any issues, I pulled out a drive and put it back in, knowing some file scanning happens. The system status shows "healthy", but I still can't write any data.
Very bad idea actually. You are lucky you didn't lose all your data.
dbrami wrote:
My research has led me to believe I need to unmount my subvolumes and remount them as rw.
No. The mitigation needed depends on exactly what the file system error is - but unmounting/remounting the subvolume is never the answer.
The error you posted says that a snapshot couldn't be rolled back. That will of course fail if the volume is already read-only, because the rollback requires write access to the volume. So we don't yet have the information needed to tell what is wrong. Possibly it is in the full log zip file, so I suggest downloading that right away. But that is also problematic, since the error happened on 8 April or before, so the needed details might no longer be in the logs.
If you don't have a full backup of your data, then the first step is to make one. Your data is definitely at risk. If you don't have enough storage, then purchase external drives, or whatever you need. Back up what critical (irreplaceable) data you can while waiting for the storage to arrive.
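Since the volume is still mounted (read-only), the copy itself can be done from SSH once the external storage arrives. A minimal sketch; the USB mount point `/media/USB_HDD_1` and the share list are assumptions you should verify with `mount` first:

```shell
# Copy each share from the read-only data volume to an external USB disk.
# rsync -a preserves permissions/timestamps and can be re-run to resume.
for share in BananaMovies CloudSync Data_backup Photos_Videos; do
    rsync -a "/data/$share/" "/media/USB_HDD_1/$share/"
done
```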
After you've ensured data safety, you have two basic options. One is to try to repair what is wrong. The other is to do a factory default, reconfigure the NAS, and restore data from backup. Although the second option can be a lot of work, in practice it is often quicker than trying to repair the problem (and success is more certain).
dbrami wrote:
Rolling back to snapshots doesn't work and Tech Support won't respond to me.
As I said above, rolling back to snapshots requires writing to the volume. The NAS isn't allowing that. The rationale is that any further write to the volume is likely to result in data loss.
Unfortunately, many folks here are finding Netgear support to be non-responsive. My own opinion is that Netgear has been quiet-quitting its storage business for some years now, and that it no longer has trained support staff. Netgear hasn't announced anything, but the facts are that new ReadyNAS units can't be found for purchase, and Netgear hasn't introduced a new ReadyNAS product since 2017.
- Sandshark (Sensei) - Experienced User
The volume will go into read-only mode when it becomes damaged in a way that additional writes will likely cause additional problems. But it's giving you time to back up data before something worse does happen. Frankly, without some previous Linux knowledge or remote help, you are unlikely to fix it, at least in a permanent manner. So backing up and doing a factory default is, IMHO, your best solution. It's the one I took when faced with this issue. But, I did have an up-to-date backup already.
- dbrami (Aspirant)
Thanks for the clear, categorical response!
BTW, it's a ReadyNAS 516 and I can't even find it on the product search when entering new topics.
I have no problem sharing the logs (I saved them on April 10th). This forum doesn't allow me to share zip files, so here's the URL:
<redacted>
If you think from looking at the logs that there's a simple command-line fix, that would be great.
If you don't, then it sounds like I need to buy 4TB of portable storage and copy my data over.
Netgear NAS == Never Again.
- StephenB (Guru) - Experienced User
There is privacy leakage when you post logs publicly. They should be sent using the private message (PM) facility in the forum. I downloaded yours, and then redacted your URL.
dbrami wrote:
If you don't, then it sounds like I need to buy 4TB of portable storage and copy my data over.
FWIW, you should have a backup strategy in place anyway (with any NAS). RAID isn't enough to keep your data safe. In general, if you don't have a proper backup, then you will eventually lose data - it's just a matter of when.
- dbrami (Aspirant)
Things have gotten worse - perhaps as a result of my meddling prior to reading your solutions.
And my logs have just blown up with warnings.
When I run the `mount` command in the CLI, I don't see any of the subvolumes that I *used* to have:
/dev/md127 on /data type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=5,subvol=/)
/dev/md127 on /home type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=257,subvol=/home)
/dev/md127 on /apps type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=258,subvol=/.apps)
/dev/md127 on /run/nfs4/data/CloudSync type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=2166,subvol=/CloudSync)
/dev/md127 on /run/nfs4/data/Data_backup type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=6290,subvol=/Data_backup)
/dev/md127 on /run/nfs4/data/Photos_Videos type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=3353,subvol=/Photos_Videos)
/dev/md127 on /run/nfs4/home type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=257,subvol=/home)
/dev/md127 on /run/nfs4/data/BananaMovies type btrfs (ro,noatime,nodiratime,nospace_cache,subvolid=271,subvol=/BananaMovies)
I now have a big enough external disk but can't access the data, AND the NAS was not able to recognize the 5TB drive when plugged into the USB 3.0 port in the back...
SO many headaches...
Any ideas on the proper command for remounting my btrfs subvolumes so I can back up my data and then do a factory reset?
Thanks in advance.
Daniel
- StephenB (Guru) - Experienced User
You might have lost the volume as a result of the reboot.
Maybe try this command first over SSH - not sure if it will help, but it should clear out the pending write transactions.
btrfs rescue zero-log /dev/md127
Make sure you log into ssh as root (using the NAS admin password).
Then try rebooting the NAS as read-only from the boot menu.
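After the reboot, it's worth confirming whether the volume actually mounted and what btrfs last logged. A sketch (read-only commands, so they can't make things worse):

```shell
# Did the data volume come back, and with which flags?
grep md127 /proc/mounts

# Most recent btrfs kernel messages (errors will show up here if the volume
# was forced read-only again).
dmesg | grep -i btrfs | tail -n 20
```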
- dbrami (Aspirant)
I thought I had posted a response, but it seems I was wrong...
I rebooted into read-only mode and then ran the `btrfs rescue zero-log` command, but nothing seems to have happened.
I did run a complete disk check, and the report stated that drives #5 AND #6 had errors.
Do you think if I replace them one at a time, I might be able to access my data and back it up to the portable drive?
I did get this from the logs:
May 07, 2023 03:40:30 AM Volume: Disk test failed on disk in channel 6, model WDC_WD2000FYYZ-01UL1B1, serial WD-WCC1P0707252.
May 07, 2023 03:40:29 AM Volume: Disk test failed on disk in channel 5, model WDC_WD2000FYYZ-01UL1B1, serial WD-WCC1P0708843.
May 06, 2023 10:37:53 PM System: Alert message failed to send.
May 06, 2023 10:37:53 PM Volume: Disk test started for volume data.
May 06, 2023 09:56:55 PM System: ReadyNASOS background service started.
May 06, 2023 02:04:09 PM System: Alert message failed to send.
May 06, 2023 02:04:09 PM System: The system is shutting down.