Infamous Volume Read-Only Error: Can I force the volume to read-write again?
From searches I can tell that I'm not the first one to be struck by this.
Happily, I have cloud backups for all my data and a new NAS standing next to it and ready to replace it. Moreover, as far as I can tell from looking at the shares mounted to my desktop, the data appears to be fine.
So as soon as all the data is transferred to the new NAS, I intend to do a factory reset on this one.
However, in the meantime, it would be very helpful if I could force my old NAS (running 6.10.2) back into read-write mode while I do the transfers, if for no other reason than that, as long as the volume is read-only, all my ssh logins close immediately.
So is there a way to force my old ReadyNAS just to mount the volume read-write? I understand that this would be a bad idea in many situations, but in my case as described above, I think it should be fine.
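For the record, what I have in mind is the ordinary Linux remount, along these lines (assuming the data volume is mounted at /data, which I believe is the OS 6 default):

# Try to flip the mounted volume back to read-write.
# Note: a BTRFS filesystem that has aborted a transaction and forced itself
# read-only will usually refuse this until it is unmounted or the box reboots.
mount -o remount,rw /data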
Below are some relevant excerpts from the logs. I would have included more, but the board does not allow me to attach them, or attach a zip file of them, or even include them in the text of the message (because of the total character count limitation).
From dmesg.log:
[Sat Dec 14 16:59:20 2019] md: md127 stopped.
[Sat Dec 14 16:59:20 2019] md: bind<sdb3>
[Sat Dec 14 16:59:20 2019] md: bind<sdc3>
[Sat Dec 14 16:59:20 2019] md: bind<sdd3>
[Sat Dec 14 16:59:20 2019] md: bind<sde3>
[Sat Dec 14 16:59:20 2019] md: bind<sdf3>
[Sat Dec 14 16:59:20 2019] md: bind<sda3>
[Sat Dec 14 16:59:20 2019] md/raid:md127: device sda3 operational as raid disk 0
[Sat Dec 14 16:59:20 2019] md/raid:md127: device sdf3 operational as raid disk 5
[Sat Dec 14 16:59:20 2019] md/raid:md127: device sde3 operational as raid disk 4
[Sat Dec 14 16:59:20 2019] md/raid:md127: device sdd3 operational as raid disk 3
[Sat Dec 14 16:59:20 2019] md/raid:md127: device sdc3 operational as raid disk 2
[Sat Dec 14 16:59:20 2019] md/raid:md127: device sdb3 operational as raid disk 1
[Sat Dec 14 16:59:20 2019] md/raid:md127: allocated 6474kB
[Sat Dec 14 16:59:20 2019] md/raid:md127: raid level 5 active with 6 out of 6 devices, algorithm 2
[Sat Dec 14 16:59:20 2019] RAID conf printout:
[Sat Dec 14 16:59:20 2019]  --- level:5 rd:6 wd:6
[Sat Dec 14 16:59:20 2019]  disk 0, o:1, dev:sda3
[Sat Dec 14 16:59:20 2019]  disk 1, o:1, dev:sdb3
[Sat Dec 14 16:59:20 2019]  disk 2, o:1, dev:sdc3
[Sat Dec 14 16:59:20 2019]  disk 3, o:1, dev:sdd3
[Sat Dec 14 16:59:20 2019]  disk 4, o:1, dev:sde3
[Sat Dec 14 16:59:20 2019]  disk 5, o:1, dev:sdf3
[Sat Dec 14 16:59:20 2019] md127: detected capacity change from 0 to 9977153454080
[Sat Dec 14 16:59:21 2019] Adding 2093052k swap on /dev/md1. Priority:-1 extents:1 across:2093052k
[Sat Dec 14 16:59:21 2019] BTRFS: device label 33ea90f5:data devid 1 transid 1797843 /dev/md127
[Sat Dec 14 16:59:21 2019] md: md126 stopped.
[Sat Dec 14 16:59:21 2019] md: bind<sdc4>
[Sat Dec 14 16:59:21 2019] md: bind<sdb4>
[Sat Dec 14 16:59:21 2019] md: bind<sdd4>
[Sat Dec 14 16:59:21 2019] md: bind<sde4>
[Sat Dec 14 16:59:21 2019] md: bind<sdf4>
[Sat Dec 14 16:59:21 2019] md: bind<sda4>
[Sat Dec 14 16:59:21 2019] md/raid:md126: device sda4 operational as raid disk 0
[Sat Dec 14 16:59:21 2019] md/raid:md126: device sdf4 operational as raid disk 5
[Sat Dec 14 16:59:21 2019] md/raid:md126: device sde4 operational as raid disk 4
[Sat Dec 14 16:59:21 2019] md/raid:md126: device sdd4 operational as raid disk 3
[Sat Dec 14 16:59:21 2019] md/raid:md126: device sdb4 operational as raid disk 2
[Sat Dec 14 16:59:21 2019] md/raid:md126: device sdc4 operational as raid disk 1
[Sat Dec 14 16:59:21 2019] md/raid:md126: allocated 6474kB
[Sat Dec 14 16:59:21 2019] md/raid:md126: raid level 5 active with 6 out of 6 devices, algorithm 2
[Sat Dec 14 16:59:21 2019] RAID conf printout:
[Sat Dec 14 16:59:21 2019]  --- level:5 rd:6 wd:6
[Sat Dec 14 16:59:21 2019]  disk 0, o:1, dev:sda4
[Sat Dec 14 16:59:21 2019]  disk 1, o:1, dev:sdc4
[Sat Dec 14 16:59:21 2019]  disk 2, o:1, dev:sdb4
[Sat Dec 14 16:59:21 2019]  disk 3, o:1, dev:sdd4
[Sat Dec 14 16:59:21 2019]  disk 4, o:1, dev:sde4
[Sat Dec 14 16:59:21 2019]  disk 5, o:1, dev:sdf4
[Sat Dec 14 16:59:21 2019] md126: detected capacity change from 0 to 10001279549440
[Sat Dec 14 16:59:21 2019] BTRFS: device label 33ea90f5:data devid 2 transid 1797843 /dev/md126
[Sat Dec 14 16:59:22 2019] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Sat Dec 14 16:59:46 2019] BTRFS info (device md126): checking UUID tree
[Sat Dec 14 16:59:47 2019] eth1: network connection down
[Sat Dec 14 16:59:47 2019] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[Sat Dec 14 16:59:50 2019] eth1: network connection up using port A
[Sat Dec 14 16:59:50 2019] interrupt src: MSI
[Sat Dec 14 16:59:50 2019] speed: 1000
[Sat Dec 14 16:59:50 2019] autonegotiation: yes
[Sat Dec 14 16:59:50 2019] duplex mode: full
[Sat Dec 14 16:59:50 2019] flowctrl: symmetric
[Sat Dec 14 16:59:50 2019] role: slave
[Sat Dec 14 16:59:50 2019] tcp offload: enabled
[Sat Dec 14 16:59:50 2019] scatter-gather: enabled
[Sat Dec 14 16:59:50 2019] tx-checksum: enabled
[Sat Dec 14 16:59:50 2019] rx-checksum: enabled
[Sat Dec 14 16:59:50 2019] rx-polling: enabled
[Sat Dec 14 16:59:50 2019] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[Sat Dec 14 17:00:20 2019] BTRFS error (device md126): parent transid verify failed on 7785210937344 wanted 491206 found 1797843
[Sat Dec 14 17:00:20 2019] BTRFS error (device md126): parent transid verify failed on 7785210937344 wanted 491206 found 1797843
[Sat Dec 14 17:00:20 2019] BTRFS warning (device md126): Skipping commit of aborted transaction.
[Sat Dec 14 17:00:20 2019] BTRFS: error (device md126) in cleanup_transaction:1864: errno=-5 IO failure
[Sat Dec 14 17:00:20 2019] BTRFS info (device md126): forced readonly
[Sat Dec 14 17:00:20 2019] BTRFS: error (device md126) in btrfs_drop_snapshot:9412: errno=-5 IO failure
[Sat Dec 14 17:00:20 2019] BTRFS info (device md126): delayed_refs has NO entry
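Note that both RAID groups assemble with all six disks, so this looks like filesystem-level trouble rather than a dead disk. If it helps anyone, the current state can be double-checked with:

# Show mount options for the BTRFS data volume (look for "ro" among the flags)
grep btrfs /proc/mounts
# Confirm the md arrays are healthy (a 6-disk RAID 5 should show [6/6] [UUUUUU])
cat /proc/mdstat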
From auth.log:
Dec 14 17:10:36 STORION sshd[5381]: pam_unix(sshd:session): session opened for user CarlEdman by (uid=0)
Dec 14 17:10:36 STORION sshd[5381]: pam_unix(sshd:session): session closed for user CarlEdman
Dec 14 17:10:54 STORION sshd[5395]: pam_unix(sshd:session): session opened for user CarlEdman by (uid=0)
Dec 14 17:11:03 STORION sshd[5395]: pam_unix(sshd:session): session closed for user CarlEdman
Dec 14 17:11:24 STORION sshd[5444]: pam_unix(sshd:session): session opened for user admin by (uid=0)
Dec 14 17:11:30 STORION sshd[5444]: pam_unix(sshd:session): session closed for user admin
Dec 14 17:12:21 STORION sshd[5500]: pam_unix(sshd:session): session opened for user admin by (uid=0)
Dec 14 17:12:26 STORION sshd[5500]: pam_unix(sshd:session): session closed for user admin
Dec 14 17:17:01 STORION CRON[6302]: pam_unix(cron:session): session opened for user root by (uid=0)
Dec 14 17:17:01 STORION CRON[6302]: pam_unix(cron:session): session closed for user root
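Since my immediate goal is only to copy everything off, my working assumption is that a non-interactive pull over ssh may still succeed even while interactive logins are closing, roughly like this (the hostname is mine; the share name is just a placeholder):

# rsync runs a single remote command over ssh instead of a login shell,
# so it can work even when interactive sessions close immediately
rsync -av admin@STORION:/data/someshare/ /local/backup/someshare/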
Re: Infamous Volume Read-Only Error: Can I force the volume to read-write again?
@CarlEdman wrote:
...as long as the volume is read-only all my ssh logins close immediately.
From auth.log:
Dec 14 17:10:36 STORION sshd[5381]: pam_unix(sshd:session): session opened for user CarlEdman by (uid=0)
Dec 14 17:10:36 STORION sshd[5381]: pam_unix(sshd:session): session closed for user CarlEdman
Dec 14 17:10:54 STORION sshd[5395]: pam_unix(sshd:session): session opened for user CarlEdman by (uid=0)
Dec 14 17:11:03 STORION sshd[5395]: pam_unix(sshd:session): session closed for user CarlEdman
Dec 14 17:11:24 STORION sshd[5444]: pam_unix(sshd:session): session opened for user admin by (uid=0)
Dec 14 17:11:30 STORION sshd[5444]: pam_unix(sshd:session): session closed for user admin
Dec 14 17:12:21 STORION sshd[5500]: pam_unix(sshd:session): session opened for user admin by (uid=0)
Dec 14 17:12:26 STORION sshd[5500]: pam_unix(sshd:session): session closed for user admin
I don't think this is usual behavior when the volume is read-only.
- Did you modify the ssh invocation in some way (to run a command)?
- Does this also happen when you log in as root (which is the normal way)? Not sure if the cron failure involving root has the same cause.
I don't know of any way to remount the volume as read-write, other than to use ssh.
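One way to separate the shell problem from the volume problem is to force a single remote command instead of an interactive login, something like:

# Run one command over ssh, bypassing the interactive login shell;
# -v shows exactly where the connection dies if it still closes
ssh -v root@STORION 'cat /proc/mounts'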
FWIW, this appears to be the problem that put the volume in read-only mode.
[Sat Dec 14 17:00:20 2019] BTRFS error (device md126): parent transid verify failed on 7785210937344 wanted 491206 found 1797843
[Sat Dec 14 17:00:20 2019] BTRFS error (device md126): parent transid verify failed on 7785210937344 wanted 491206 found 1797843
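For what it's worth, "parent transid verify failed" means a metadata block's generation number does not match the one its parent block recorded, i.e. the BTRFS metadata trees are internally inconsistent, which is why the filesystem aborted the transaction and forced itself read-only. If you want to rule out the disks themselves, this is safe to run even on a mounted volume (mount point assumed to be /data):

# Per-device error counters tracked by BTRFS
# (read/write/flush/corruption/generation errors)
btrfs device stats /data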
Re: Infamous Volume Read-Only Error: Can I force the volume to read-write again?
When one of my volumes went bad, it didn't stop SSH access, so I wonder whether there is also some issue with your OS partition. Nothing I did would keep the volume read/write for more than a couple of minutes, but I had SSH access throughout.
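If you want to check that, the OS partition on these units is a separate md array (md0 on OS 6, if I remember correctly), so you can see whether the root filesystem itself is still read-write:

# Check whether the root filesystem (the OS partition) is mounted rw or ro
grep ' / ' /proc/mounts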
Re: Infamous Volume Read-Only Error: Can I force the volume to read-write again?
Hmm. The SSH failures and the switch to read-only status happened at the same time, and I had not changed anything in the SSH configuration, so I assumed they were related. But indeed, it seems I can log in via ssh to some accounts, like admin, but not to others, like my own. That should be enough to get me going until I factory-reset the device. Still, does anybody know what could have caused this, if not the read-only status of the main volume?
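In the meantime, I will compare the working and failing accounts myself, on the guess (and it is only a guess) that the difference lies in where their home directories or login shells live relative to the read-only volume:

# Compare home directory and shell for a working and a failing account
getent passwd admin
getent passwd CarlEdman
# Watch exactly where the failing login dies
ssh -v CarlEdman@STORION true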