tglazier
May 30, 2019 (Aspirant)
RN314 update to 6.10.1 made volume read-only
I upgraded to the current release 6.10.1 and after reboot I received the message: "Volume: The volume data encountered an error and was made read-only. It is recommended to backup your data." Has anyone ...
- Jun 03, 2019
tglazier wrote:
So it looks like disk 3 is the problem. Is it possible to just replace the disk and let the RAID rebuild itself? And when it's done, will it become writable once again?
Normally a failed disk won't result in a read-only volume - so I'm not sure if replacing the disk is enough. You could try booting up without disk 3 and see if the volume still ends up read-only. If it doesn't, then hot-insert the replacement disk.
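(Side note: whether the volume came up read-only or read-write after a boot can be confirmed over ssh from the mount options; this assumes the default ReadyNAS volume name "data":
# grep /data /proc/mounts
The option list will show "rw" if the volume is writable again, or "ro" if BTRFS forced it read-only.)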
tglazier
May 30, 2019 (Aspirant)
I am not so worried about the data, as this box is a DR box and not our primary NAS, so I can retrieve the data from our off-site facility, default the unit, and rebuild it.
Thanks for the log info. I found this in the Kernel log.
May 29 10:12:17 matrix-nas-rmt kernel: BTRFS error (device md127): parent transid verify failed on 304545792 wanted 749219 found 773951
May 29 10:12:17 matrix-nas-rmt kernel: BTRFS error (device md127): parent transid verify failed on 304545792 wanted 749219 found 773951
May 29 10:12:17 matrix-nas-rmt kernel: BTRFS warning (device md127): Skipping commit of aborted transaction.
May 29 10:12:17 matrix-nas-rmt kernel: BTRFS: error (device md127) in cleanup_transaction:1864: errno=-5 IO failure
May 29 10:12:17 matrix-nas-rmt kernel: BTRFS info (device md127): forced readonly
May 29 10:12:17 matrix-nas-rmt kernel: BTRFS info (device md127): delayed_refs has NO entry
May 29 10:12:17 matrix-nas-rmt kernel: VFS:Filesystem freeze failed
May 29 10:12:19 matrix-nas-rmt kernel: BTRFS error (device md127): Remounting read-write after error is not allowed
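(Side note: the forced-readonly state in that log usually comes with per-device I/O error counters that can be read over ssh; /data is assumed to be the default data-volume mount point here:
# btrfs device stats /data
# dmesg | grep -iE 'btrfs|md127|ata[0-9]'
A device with non-zero read/write/flush error counters, or repeated ATA errors in dmesg, is the likely culprit behind the BTRFS abort.)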
StephenB
May 30, 2019 (Guru - Experienced User)
So you do have btrfs errors and a disk i/o failure.
If rebuilding is easy, I'd just do that. If you have a lot of users to configure, you can instead destroy just the data volume and recreate it, which avoids a full factory default.
- tglazier, May 30, 2019 (Aspirant)
Is there an easy way to identify which disk is having the issue, other than pulling each disk and running the manufacturer's tool on them?
- StephenB, May 30, 2019 (Guru - Experienced User)
One easy thing to do is look in disk_info.log in the log zip file. That will give you the SMART statistics for each disk.
You could also enable ssh, and look for disk errors by entering
# smartctl -x /dev/sda
(repeating for each disk you have).
That will give more information than is in disk_info.log. If there is a section for one of the disks that includes stuff like
Error XX [YY] occurred at disk power-on lifetime: ZZ hours (ZZ days + ZZ hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were:
then take a closer look - especially if the time is recent.
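A quick way to sweep all bays at once is a small loop over the disks; the /dev/sda through /dev/sdd device names are an assumption for a 4-bay unit, so match them to what the NAS actually reports:
# for d in /dev/sd[a-d]; do echo "== $d =="; smartctl -x $d | grep -E 'Device Model|Reallocated_Sector|Current_Pending_Sector|occurred at disk power-on'; done
Any disk that prints "occurred at disk power-on lifetime" error lines, or non-zero reallocated/pending sector counts, deserves the closer look described above.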
- tglazier, May 31, 2019 (Aspirant)
I accessed the disks through ssh and found that disk 3 is having the issue. Is it possible to just replace the disk and let the RAID rebuild itself?
Error 5 [4] occurred at disk power-on lifetime: 24251 hours (1010 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 09 e7 bb 50 40 00 Error: UNC at LBA = 0x09e7bb50 = 166181712
Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 50 00 50 00 00 09 e7 a5 f0 40 08 19d+16:31:11.371 READ FPDMA QUEUED
60 00 80 00 48 00 00 09 e7 bb 40 40 08 19d+16:31:11.371 READ FPDMA QUEUED
60 00 78 00 40 00 00 09 e7 bb c8 40 08 19d+16:31:11.370 READ FPDMA QUEUED
60 00 08 00 38 00 00 09 e7 bb c0 40 08 19d+16:31:11.370 READ FPDMA QUEUED
ea 00 00 00 00 00 00 00 00 00 00 e0 08 19d+16:31:11.349 FLUSH CACHE EXT

Error 4 [3] occurred at disk power-on lifetime: 24251 hours (1010 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 09 e7 bb 50 40 00 Error: UNC at LBA = 0x09e7bb50 = 166181712
Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 01 80 00 e8 00 00 09 e7 ba c0 40 08 19d+16:31:07.608 READ FPDMA QUEUED
ea 00 00 00 00 00 00 00 00 00 00 e0 08 19d+16:31:07.587 FLUSH CACHE EXT
61 00 08 00 d0 00 00 01 14 ee 48 40 08 19d+16:31:07.587 WRITE FPDMA QUEUED
61 00 01 00 c8 00 00 00 00 00 48 40 08 19d+16:31:07.587 WRITE FPDMA QUEUED
ea 00 00 00 00 00 00 00 00 00 00 e0 08 19d+16:31:07.587 FLUSH CACHE EXT

Error 3 [2] occurred at disk power-on lifetime: 24251 hours (1010 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 09 e7 a5 f0 40 00 Error: WP at LBA = 0x09e7a5f0 = 166176240
Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
61 00 40 00 58 00 00 00 09 31 40 40 08 19d+16:31:05.077 WRITE FPDMA QUEUED
61 00 40 00 50 00 00 00 02 cc c0 40 08 19d+16:31:05.077 WRITE FPDMA QUEUED
61 00 08 00 48 00 00 00 16 09 58 40 08 19d+16:31:05.077 WRITE FPDMA QUEUED
60 00 80 00 40 00 00 09 e7 a5 c0 40 08 19d+16:31:05.077 READ FPDMA QUEUED
ea 00 00 00 00 00 00 00 00 00 00 e0 08 19d+16:31:05.059 FLUSH CACHE EXT

Error 2 [1] occurred at disk power-on lifetime: 24251 hours (1010 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 09 e7 a5 f0 40 00 Error: UNC at LBA = 0x09e7a5f0 = 166176240
Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 01 00 00 48 00 00 09 e7 ae c0 40 08 19d+16:31:02.465 READ FPDMA QUEUED
60 01 80 00 40 00 00 09 e7 ac c0 40 08 19d+16:31:02.465 READ FPDMA QUEUED
60 01 80 00 38 00 00 09 e7 aa c0 40 08 19d+16:31:02.463 READ FPDMA QUEUED
60 01 80 00 30 00 00 09 e7 a8 c0 40 08 19d+16:31:02.463 READ FPDMA QUEUED
60 01 80 00 28 00 00 09 e7 a6 c0 40 08 19d+16:31:02.463 READ FPDMA QUEUED

Error 1 [0] occurred at disk power-on lifetime: 18710 hours (779 days + 14 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
40 -- 51 00 00 00 00 07 c6 67 e0 40 00 Error: UNC at LBA = 0x07c667e0 = 130443232
Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
60 00 80 00 c8 00 00 07 c6 7c c0 40 08 37d+08:55:54.705 READ FPDMA QUEUED
60 01 80 00 c0 00 00 07 c6 7a c0 40 08 37d+08:55:54.705 READ FPDMA QUEUED
60 01 80 00 b8 00 00 07 c6 78 c0 40 08 37d+08:55:54.705 READ FPDMA QUEUED
60 01 80 00 b0 00 00 07 c6 76 c0 40 08 37d+08:55:54.705 READ FPDMA QUEUED
60 01 80 00 a8 00 00 07 c6 74 c0 40 08 37d+08:55:54.705 READ FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]
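(Side note: if the disk-replacement route is taken, the resync after hot-inserting a new disk in bay 3 can be followed over ssh with the standard mdadm tools; md127 is the data array named in the kernel log above:
# cat /proc/mdstat
# mdadm --detail /dev/md127
As the accepted answer notes, the rebuild alone may not clear the read-only flag if the BTRFS metadata itself took damage, so recheck the mount options once the resync completes.)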