
ReadyNAS Pro 6 crashed again

StephenB
Guru

Re: ReadyNAS Pro 6 crashed again


@tony359 wrote:

 

Is there a way to do an offline test of my drives? 🙂


There is an on-line test in the maintenance menu you can use.  That runs the full built-in SMART test on all the drives in the volume.

 

You can also use smartctl -x /dev/sda from ssh to see more errors (UNCs in particular) on sda (or whatever disk you wish).
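For example, to skim a disk's output for the attributes and logged errors that usually matter (the grep pattern is just a suggestion, not exhaustive):

smartctl -x /dev/sda | grep -iE 'UNC|Reallocated|Pending|Self-test'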

 

As far as off-line goes, the simplest way is to connect the drive to a Windows PC and run the vendor diagnostic - Dashboard for WDC, and SeaTools for Seagate.  Unfortunately they don't run on macOS.

 

But it seems to me that your symptoms are pointing either to the switch or perhaps the cable going from the NAS to the switch.  It's always the NIC port connected to that switch that fails, and the other NIC always continues to work fine. 

Message 126 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

Hi Stephen,

 

No, the ports were swapped last time - also the switch and the cable. So it's not a NIC or network issue. Well, it ALWAYS fails on that NETWORK, so it could be something on my main network. But on this occasion the NAS was wired to the main switch on another port and through an additional switch. So if it's something with that network, it's not a hardware issue. 

 

The online maintenance test runs periodically. The logs show an "offline" test though. How should I read that? The drive is now at 51888 hours.

 

 

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Interrupted (host reset)      90%     50227         -
# 2  Extended offline    Completed without error       00%     48081         -
# 3  Extended offline    Completed without error       00%     45875         -
# 4  Extended offline    Completed without error       00%     43691         -
# 5  Extended offline    Completed without error       00%     41536         -
# 6  Extended offline    Completed without error       00%     39834         -
# 7  Extended offline    Completed without error       00%     37636         -
# 8  Extended offline    Completed without error       00%     35455         -
# 9  Extended offline    Completed without error       00%     33273         -
#10  Extended offline    Completed without error       00%     31118         -
#11  Extended offline    Completed without error       00%     28912         -
#12  Extended offline    Completed without error       00%     26707         -
#13  Extended offline    Completed without error       00%     24525         -
#14  Extended offline    Completed without error       00%     22554         -
#15  Extended offline    Completed without error       00%     20712         -
#16  Extended offline    Completed without error       00%     19182         -
#17  Short offline       Completed without error       00%        82         -
#18  Short offline       Completed without error       00%        63         -

 

I ran smartctl -x in the past and posted the output earlier in this thread. I didn't spot anything, but I am not an expert. There are UNC errors on sda (which I have since moved to sde), but at 7872 hours, a few years ago! 🙂

 

 

Error 159 [14] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b cc 40 40 00  Error: WP at LBA = 0x4b2bcc40 = 1261161536

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 04 00 00 08 00 00 4b 2b c8 40 40 08     14:59:14.849  WRITE FPDMA QUEUED
  60 04 00 00 00 00 00 4b 2b cc 40 40 08     14:59:14.849  READ FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08     14:59:14.849  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08     14:59:14.849  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08     14:59:14.849  IDENTIFY DEVICE

Error 158 [13] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b cc 40 40 00  Error: UNC at LBA = 0x4b2bcc40 = 1261161536

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 04 00 00 08 00 00 4b 2b cc 40 40 08     14:59:11.031  READ FPDMA QUEUED
  61 04 00 00 00 00 00 4b 2b c8 40 40 08     14:59:11.031  WRITE FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08     14:59:11.031  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08     14:59:11.031  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08     14:59:11.030  IDENTIFY DEVICE

Error 157 [12] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b cc 40 40 00  Error: WP at LBA = 0x4b2bcc40 = 1261161536

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 04 00 00 08 00 00 4b 2b c8 40 40 08     14:59:07.223  WRITE FPDMA QUEUED
  60 04 00 00 00 00 00 4b 2b cc 40 40 08     14:59:07.223  READ FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08     14:59:07.223  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08     14:59:07.223  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08     14:59:07.223  IDENTIFY DEVICE

Error 156 [11] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b cc 40 40 00  Error: UNC at LBA = 0x4b2bcc40 = 1261161536

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 04 00 00 08 00 00 4b 2b cc 40 40 08     14:59:03.405  READ FPDMA QUEUED
  61 04 00 00 00 00 00 4b 2b c8 40 40 08     14:59:03.405  WRITE FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08     14:59:03.405  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08     14:59:03.405  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08     14:59:03.405  IDENTIFY DEVICE

Error 155 [10] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b cc 40 40 00  Error: WP at LBA = 0x4b2bcc40 = 1261161536

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 04 00 00 08 00 00 4b 2b c8 40 40 08     14:58:59.720  WRITE FPDMA QUEUED
  60 04 00 00 00 00 00 4b 2b cc 40 40 08     14:58:59.720  READ FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08     14:58:59.720  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08     14:58:59.720  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08     14:58:59.719  IDENTIFY DEVICE

Error 154 [9] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b cc 40 40 00  Error: UNC at LBA = 0x4b2bcc40 = 1261161536

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 04 00 00 08 00 00 4b 2b cc 40 40 08     14:58:55.900  READ FPDMA QUEUED
  61 04 00 00 00 00 00 4b 2b c8 40 40 08     14:58:55.900  WRITE FPDMA QUEUED
  ea 00 00 00 00 00 00 00 00 00 00 e0 08     14:58:55.873  FLUSH CACHE EXT
  60 00 08 00 08 00 00 00 7f 22 18 40 08     14:58:55.838  READ FPDMA QUEUED
  61 00 02 00 00 00 00 00 00 00 48 40 08     14:58:55.838  WRITE FPDMA QUEUED

Error 153 [8] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b c8 40 40 00  Error: UNC at LBA = 0x4b2bc840 = 1261160512

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 04 00 00 00 00 00 4b 2b c8 40 40 08     14:58:52.283  READ FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08     14:58:52.283  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08     14:58:52.283  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08     14:58:52.282  IDENTIFY DEVICE
  ef 00 03 00 46 00 00 00 00 00 00 a0 08     14:58:52.282  SET FEATURES [Set transfer mode]

Error 152 [7] occurred at disk power-on lifetime: 7872 hours (328 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 00 4b 2b c8 40 40 00  Error: UNC at LBA = 0x4b2bc840 = 1261160512

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 04 00 00 00 00 00 4b 2b c8 40 40 08     14:58:48.786  READ FPDMA QUEUED
  ef 00 10 00 02 00 00 00 00 00 00 a0 08     14:58:48.786  SET FEATURES [Enable SATA feature]
  27 00 00 00 00 00 00 00 00 00 00 e0 08     14:58:48.786  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 00 00 00 00 00 a0 08     14:58:48.786  IDENTIFY DEVICE
  ef 00 03 00 46 00 00 00 00 00 00 a0 08     14:58:48.786  SET FEATURES [Set transfer mode]

 

 

I am on Windows so that's fine, but wouldn't it be better to run the tests on a Linux system so the file system can be checked as well? Also, I think I'd prefer the disks to stay unmounted so there is less chance of damaging the RAID. 

 

Can I start the NAS from a Debian live-USB? I could run the checks from there, assuming VGA works.  And what do you think of the suggestion of running btrfs check on the drives? I don't dislike the idea of checking the file system.

 

@schumaku 

The NAS disappeared again, so I've run dmesg and attached the output (this forum lacks the ability to attach text files!).

 

Am I seeing lots of network-down messages after what seems to be a gap? And on both eth0 and eth1. 

 

Disabling and re-enabling ETH0 worked as usual.

And yes, I've now disabled IPv6 (it got re-enabled when I swapped the IPs, I think).

 

Message 127 of 191
Sandshark
Sensei

Re: ReadyNAS Pro 6 crashed again

Yes, you can start a legacy NAS from a Debian Live USB (or even DOS or Windows).  Native OS6 models are more picky about what they will start from.

Message 128 of 191
StephenB
Guru

Re: ReadyNAS Pro 6 crashed again


@tony359 wrote:

 

I am on Windows so that's fine, but wouldn't it be better to run the tests on a Linux system so the file system can be checked as well? Also, I think I'd prefer the disks to stay unmounted so there is less chance of damaging the RAID. 

 

 


I don't think so.  If you needed that, I'd do it in the NAS.

 

I really don't see how this can be the disks or the file system.  If it were, the second NIC wouldn't be responsive when the problem occurs.  Plus normal operation wouldn't resume when you set the interface down and then up again.

 


@tony359 wrote:

 

No, the ports were swapped last time - also the switch and the cable. So it's not a NIC or network issue. Well, it ALWAYS fails on that NETWORK, so it could be something on my main network.

I think it's definitely a network issue, though perhaps not at the physical layer.  The puzzle is what.

 

Are you using the NAS differently on the main network than you are on the PC connection?

 

The history here is of course extensive, and I'm having trouble keeping everything straight.  Did the NAS ever lock up when it was only connected to the main network (with the PC NIC disconnected)?

 


@tony359 wrote:

 

The online maintenance test runs periodically. The logs show an "offline" test though. How should I read that? The drive is now at 51888 hours.

 

The "extended offline" record is actually the test you run from the maintenance settings.  No idea why is it described as "offline" by smartctl.

 

You should also see it at the end of volume.log.  It looks like the NAS crashed (or was shut down) before the most recent test finished.

Message 129 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

>I don't think so.  If you needed that, I'd do it in the NAS.

 

>I really don't see how this can be the disks or the file system.  If it were, the second NIC wouldn't be responsive when the >problem occurs.  Plus normal operation wouldn't resume when you set the interface down and then up again.

 

I appreciate your view and I don't disagree with it.

But this has been going on for months and I've tried many things short of a new set of HDDs. 

 

Before I start messing with my data, I'd like to exhaust all the options.

 

One of them is to do an offline check via live-USB. As I am not super-skilled with Linux and I care about my data, can someone roughly guide me so I don't obliterate my data? 🙂

 

I guess I'll boot from a live USB; the 5 RAID HDDs are not going to be mounted by default. 

I can then run

 

btrfs check --readonly /dev/sdX

 

This should check the file system? 

 

Then smartctl -t long /dev/sdX
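Once the long test completes (I understand it can take several hours), I guess I can read the result back with:

smartctl -l selftest /dev/sdX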

 

Anything else anybody can think I should do while the HDDs are offline?

 

>I think definitely a network issue, though perhaps not the physical layer.  The puzzle is what.

>Are you using the NAS differently on the main network than you are on the PC connection?

>The history here is of course extensive, and I'm have trouble keeping everything straight.  Did the NAS ever lock up >when it was only connected to the main network (with the PC NIC disconnected)?

 

The PC and the NAS are plugged into the same switch. There is nothing running on the NAS; I only use it as a file server.

I appreciate the history is long and I thank you for bearing with me for so long and not suggesting I should go buy a QNAP 🙂

 

The second NIC connected to the PC is a recent addition as I discovered that when the NAS disappears I can still access it via the other NIC. The behaviour hasn't changed since I also plugged the PC directly into the NAS. 

 

Months ago, the NAS stopped misbehaving when I completely disconnected it from ANY networks. 

A week later I plugged it into the PC only (no main network, no internet).

Some weeks of good behaviour later, I put the NAS back on the main network, removing some port forwarding I had in the main router.

 

It worked PERFECTLY for 2 months. 

 

Then it started disappearing twice a day. Out of the blue. 

 

This is why I am pursuing unlikely routes: the above events point to NOTHING! 🙂 

Message 130 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

quick addendum:

I've made a live-USB of Debian and played with it and a random HDD which I formatted as btrfs. 

If anybody has any suggestions on what to test while offline, please do let me know!

 

Also, if someone has any suggestions on what NOT to do while playing with those HDD, please also do let me know!

Message 131 of 191
StephenB
Guru

Re: ReadyNAS Pro 6 crashed again

You'd need to assemble the RAID array in order to run btrfs check against it.

 

Since your system boots, you can just run ssh (logging in as root), and run btrfs check from there.  The device would be /dev/md127 (the raid array virtual disk).

 

Use --force because the file system is mounted.  It won't try to repair anything, so no need to worry about read-only.  Don't write anything to the data volume while it is running.

 

root@RN102:~# btrfs check --force /dev/md127
WARNING: filesystem mounted, continuing because of --force
Checking filesystem on /dev/md127
...

 

 

You can also run smartctl from ssh (/dev/sda, etc.), so no need to use the live CD there either.

 

root@RN102:~# smartctl --test=long /dev/sda
smartctl 6.6 2017-11-05 r4594 [armv7l-linux-4.4.218.armada.1] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Extended self-test routine immediately in off-line mode".
Drive command "Execute SMART Extended self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 127 minutes for test to complete.
Test will complete after Tue Jun 13 20:35:03 2023

Use smartctl -X to abort test.
root@RN102:~#

 


Message 132 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

Re btrfs, I was thinking exactly that - how can the OS check the files if they're part of a RAID?

But I was under the impression that the disks should be unmounted in order for those checks to be done properly?

 

I'm referring to this message: https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/ReadyNAS-Pro-6-crashed-again/m-p/23...

 

Also, MD126 is the main data volume; should I also check the ones where the OS is stored? I have MD0 (4GB), MD1 (1.3GB), MD127 (1.8TB), MD126 (14.5TB).

Message 133 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

First disk passed the long SMART test successfully; now onto the second one.

 

Meanwhile the NAS disappeared. I SSHed in via the second port and revived it the usual way. 

 

dmesg adds the following from yesterday:

 

[Tue Jun 13 16:14:02 2023] eth1: network connection down
[Tue Jun 13 16:14:05 2023] eth1: network connection up using port A
[Tue Jun 13 16:14:05 2023]     interrupt src:   MSI
[Tue Jun 13 16:14:05 2023]     speed:           10
[Tue Jun 13 16:14:05 2023]     autonegotiation: yes
[Tue Jun 13 16:14:05 2023]     duplex mode:     full
[Tue Jun 13 16:14:05 2023]     flowctrl:        none
[Tue Jun 13 16:14:05 2023]     tcp offload:     enabled
[Tue Jun 13 16:14:05 2023]     scatter-gather:  enabled
[Tue Jun 13 16:14:05 2023]     tx-checksum:     enabled
[Tue Jun 13 16:14:05 2023]     rx-checksum:     enabled
[Tue Jun 13 16:14:05 2023]     rx-polling:      enabled
[Tue Jun 13 18:08:49 2023] eth1: network connection down
[Tue Jun 13 18:08:54 2023] eth1: network connection up using port A
[Tue Jun 13 18:08:54 2023]     interrupt src:   MSI
[Tue Jun 13 18:08:54 2023]     speed:           1000
[Tue Jun 13 18:08:54 2023]     autonegotiation: yes
[Tue Jun 13 18:08:54 2023]     duplex mode:     full
[Tue Jun 13 18:08:54 2023]     flowctrl:        symmetric
[Tue Jun 13 18:08:54 2023]     role:            slave
[Tue Jun 13 18:08:54 2023]     tcp offload:     enabled
[Tue Jun 13 18:08:54 2023]     scatter-gather:  enabled
[Tue Jun 13 18:08:54 2023]     tx-checksum:     enabled
[Tue Jun 13 18:08:54 2023]     rx-checksum:     enabled
[Tue Jun 13 18:08:54 2023]     rx-polling:      enabled
[Tue Jun 13 18:55:48 2023] eth1: network connection down
[Tue Jun 13 18:55:51 2023] eth1: network connection up using port A
[Tue Jun 13 18:55:51 2023]     interrupt src:   MSI
[Tue Jun 13 18:55:51 2023]     speed:           10
[Tue Jun 13 18:55:51 2023]     autonegotiation: yes
[Tue Jun 13 18:55:51 2023]     duplex mode:     full
[Tue Jun 13 18:55:51 2023]     flowctrl:        none
[Tue Jun 13 18:55:51 2023]     tcp offload:     enabled
[Tue Jun 13 18:55:51 2023]     scatter-gather:  enabled
[Tue Jun 13 18:55:51 2023]     tx-checksum:     enabled
[Tue Jun 13 18:55:51 2023]     rx-checksum:     enabled
[Tue Jun 13 18:55:51 2023]     rx-polling:      enabled
[Tue Jun 13 18:55:53 2023] eth1: network connection down
[Tue Jun 13 18:55:56 2023] eth1: network connection up using port A
[Tue Jun 13 18:55:56 2023]     interrupt src:   MSI
[Tue Jun 13 18:55:56 2023]     speed:           1000
[Tue Jun 13 18:55:56 2023]     autonegotiation: yes
[Tue Jun 13 18:55:56 2023]     duplex mode:     full
[Tue Jun 13 18:55:56 2023]     flowctrl:        symmetric
[Tue Jun 13 18:55:56 2023]     role:            master
[Tue Jun 13 18:55:56 2023]     tcp offload:     enabled
[Tue Jun 13 18:55:56 2023]     scatter-gather:  enabled
[Tue Jun 13 18:55:56 2023]     tx-checksum:     enabled
[Tue Jun 13 18:55:56 2023]     rx-checksum:     enabled
[Tue Jun 13 18:55:56 2023]     rx-polling:      enabled
[Tue Jun 13 18:58:30 2023] eth1: network connection down
[Tue Jun 13 18:58:39 2023] eth1: network connection up using port A
[Tue Jun 13 18:58:39 2023]     interrupt src:   MSI
[Tue Jun 13 18:58:39 2023]     speed:           1000
[Tue Jun 13 18:58:39 2023]     autonegotiation: yes
[Tue Jun 13 18:58:39 2023]     duplex mode:     full
[Tue Jun 13 18:58:39 2023]     flowctrl:        symmetric
[Tue Jun 13 18:58:39 2023]     role:            master
[Tue Jun 13 18:58:39 2023]     tcp offload:     enabled
[Tue Jun 13 18:58:39 2023]     scatter-gather:  enabled
[Tue Jun 13 18:58:39 2023]     tx-checksum:     enabled
[Tue Jun 13 18:58:39 2023]     rx-checksum:     enabled
[Tue Jun 13 18:58:39 2023]     rx-polling:      enabled
[Tue Jun 13 19:03:51 2023] eth1: network connection down
[Tue Jun 13 19:04:50 2023] eth1: network connection up using port A
[Tue Jun 13 19:04:50 2023]     interrupt src:   MSI
[Tue Jun 13 19:04:50 2023]     speed:           1000
[Tue Jun 13 19:04:50 2023]     autonegotiation: yes
[Tue Jun 13 19:04:50 2023]     duplex mode:     full
[Tue Jun 13 19:04:50 2023]     flowctrl:        symmetric
[Tue Jun 13 19:04:50 2023]     role:            slave
[Tue Jun 13 19:04:50 2023]     tcp offload:     enabled
[Tue Jun 13 19:04:50 2023]     scatter-gather:  enabled
[Tue Jun 13 19:04:50 2023]     tx-checksum:     enabled
[Tue Jun 13 19:04:50 2023]     rx-checksum:     enabled
[Tue Jun 13 19:04:50 2023]     rx-polling:      enabled
[Tue Jun 13 19:40:18 2023] eth1: network connection down
[Tue Jun 13 19:40:20 2023] eth1: network connection up using port A
[Tue Jun 13 19:40:20 2023]     interrupt src:   MSI
[Tue Jun 13 19:40:20 2023]     speed:           10
[Tue Jun 13 19:40:20 2023]     autonegotiation: yes
[Tue Jun 13 19:40:20 2023]     duplex mode:     full
[Tue Jun 13 19:40:20 2023]     flowctrl:        none
[Tue Jun 13 19:40:20 2023]     tcp offload:     enabled
[Tue Jun 13 19:40:20 2023]     scatter-gather:  enabled
[Tue Jun 13 19:40:20 2023]     tx-checksum:     enabled
[Tue Jun 13 19:40:20 2023]     rx-checksum:     enabled
[Tue Jun 13 19:40:20 2023]     rx-polling:      enabled
[Tue Jun 13 19:40:23 2023] eth1: network connection down
[Tue Jun 13 19:40:26 2023] eth1: network connection up using port A
[Tue Jun 13 19:40:26 2023]     interrupt src:   MSI
[Tue Jun 13 19:40:26 2023]     speed:           1000
[Tue Jun 13 19:40:26 2023]     autonegotiation: yes
[Tue Jun 13 19:40:26 2023]     duplex mode:     full
[Tue Jun 13 19:40:26 2023]     flowctrl:        symmetric
[Tue Jun 13 19:40:26 2023]     role:            master
[Tue Jun 13 19:40:26 2023]     tcp offload:     enabled
[Tue Jun 13 19:40:26 2023]     scatter-gather:  enabled
[Tue Jun 13 19:40:26 2023]     tx-checksum:     enabled
[Tue Jun 13 19:40:26 2023]     rx-checksum:     enabled
[Tue Jun 13 19:40:26 2023]     rx-polling:      enabled
[Tue Jun 13 19:41:20 2023] eth1: network connection down
[Tue Jun 13 19:41:34 2023] eth1: network connection up using port A
[Tue Jun 13 19:41:34 2023]     interrupt src:   MSI
[Tue Jun 13 19:41:34 2023]     speed:           1000
[Tue Jun 13 19:41:34 2023]     autonegotiation: yes
[Tue Jun 13 19:41:34 2023]     duplex mode:     full
[Tue Jun 13 19:41:34 2023]     flowctrl:        symmetric
[Tue Jun 13 19:41:34 2023]     role:            master
[Tue Jun 13 19:41:34 2023]     tcp offload:     enabled
[Tue Jun 13 19:41:34 2023]     scatter-gather:  enabled
[Tue Jun 13 19:41:34 2023]     tx-checksum:     enabled
[Tue Jun 13 19:41:34 2023]     rx-checksum:     enabled
[Tue Jun 13 19:41:34 2023]     rx-polling:      enabled
[Tue Jun 13 20:30:37 2023] eth1: network connection down
[Tue Jun 13 20:31:35 2023] eth1: network connection up using port A
[Tue Jun 13 20:31:35 2023]     interrupt src:   MSI
[Tue Jun 13 20:31:35 2023]     speed:           1000
[Tue Jun 13 20:31:35 2023]     autonegotiation: yes
[Tue Jun 13 20:31:35 2023]     duplex mode:     full
[Tue Jun 13 20:31:35 2023]     flowctrl:        symmetric
[Tue Jun 13 20:31:35 2023]     role:            master
[Tue Jun 13 20:31:35 2023]     tcp offload:     enabled
[Tue Jun 13 20:31:35 2023]     scatter-gather:  enabled
[Tue Jun 13 20:31:35 2023]     tx-checksum:     enabled
[Tue Jun 13 20:31:35 2023]     rx-checksum:     enabled
[Tue Jun 13 20:31:35 2023]     rx-polling:      enabled
[Tue Jun 13 23:34:51 2023] eth1: network connection down
[Tue Jun 13 23:34:54 2023] eth1: network connection up using port A
[Tue Jun 13 23:34:54 2023]     interrupt src:   MSI
[Tue Jun 13 23:34:54 2023]     speed:           10
[Tue Jun 13 23:34:54 2023]     autonegotiation: yes
[Tue Jun 13 23:34:54 2023]     duplex mode:     full
[Tue Jun 13 23:34:54 2023]     flowctrl:        none
[Tue Jun 13 23:34:54 2023]     tcp offload:     enabled
[Tue Jun 13 23:34:54 2023]     scatter-gather:  enabled
[Tue Jun 13 23:34:54 2023]     tx-checksum:     enabled
[Tue Jun 13 23:34:54 2023]     rx-checksum:     enabled
[Tue Jun 13 23:34:54 2023]     rx-polling:      enabled
[Tue Jun 13 23:36:26 2023] eth1: network connection down
[Wed Jun 14 19:03:49 2023] eth1: network connection up using port A
[Wed Jun 14 19:03:49 2023]     interrupt src:   MSI
[Wed Jun 14 19:03:49 2023]     speed:           100
[Wed Jun 14 19:03:49 2023]     autonegotiation: yes
[Wed Jun 14 19:03:49 2023]     duplex mode:     full
[Wed Jun 14 19:03:49 2023]     flowctrl:        none
[Wed Jun 14 19:03:49 2023]     tcp offload:     enabled
[Wed Jun 14 19:03:49 2023]     scatter-gather:  enabled
[Wed Jun 14 19:03:49 2023]     tx-checksum:     enabled
[Wed Jun 14 19:03:49 2023]     rx-checksum:     enabled
[Wed Jun 14 19:03:49 2023]     rx-polling:      enabled
[Wed Jun 14 19:04:48 2023] eth1: network connection down
[Wed Jun 14 19:04:51 2023] eth1: network connection up using port A
[Wed Jun 14 19:04:51 2023]     interrupt src:   MSI
[Wed Jun 14 19:04:51 2023]     speed:           1000
[Wed Jun 14 19:04:51 2023]     autonegotiation: yes
[Wed Jun 14 19:04:51 2023]     duplex mode:     full
[Wed Jun 14 19:04:51 2023]     flowctrl:        symmetric
[Wed Jun 14 19:04:51 2023]     role:            slave
[Wed Jun 14 19:04:51 2023]     tcp offload:     enabled
[Wed Jun 14 19:04:51 2023]     scatter-gather:  enabled
[Wed Jun 14 19:04:51 2023]     tx-checksum:     enabled
[Wed Jun 14 19:04:51 2023]     rx-checksum:     enabled
[Wed Jun 14 19:04:51 2023]     rx-polling:      enabled
[Wed Jun 14 19:53:22 2023] eth0: network connection down
[Wed Jun 14 19:53:32 2023] eth0: network connection up using port A
[Wed Jun 14 19:53:32 2023]     interrupt src:   MSI
[Wed Jun 14 19:53:32 2023]     speed:           1000
[Wed Jun 14 19:53:32 2023]     autonegotiation: yes
[Wed Jun 14 19:53:32 2023]     duplex mode:     full
[Wed Jun 14 19:53:32 2023]     flowctrl:        symmetric
[Wed Jun 14 19:53:32 2023]     role:            slave
[Wed Jun 14 19:53:32 2023]     tcp offload:     enabled
[Wed Jun 14 19:53:32 2023]     scatter-gather:  enabled
[Wed Jun 14 19:53:32 2023]     tx-checksum:     enabled
[Wed Jun 14 19:53:32 2023]     rx-checksum:     enabled
[Wed Jun 14 19:53:32 2023]     rx-polling:      enabled

 

As you can see, eth1 seems to go up and down. However, that's the "good" port, the one connected to my desktop. Fair enough: I switched off the computer at night, and maybe in the afternoon when I went out. But that would account for only two of those events, not this many. 

 

Uhm... 

Message 134 of 191
StephenB
Guru

Re: ReadyNAS Pro 6 crashed again

Interesting that eth1 is also going down.

 

I am wondering if there is anything else in the logs around Jun 14 19:53:22.  I'm thinking that you should check kernel.log, system.log, and systemd-journal.log.
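If you have the log zip extracted, something like this would pull out the relevant window:

grep 'Jun 14 19:5' kernel.log system.log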

 

Message 135 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

19:53 is me bringing it down via ifconfig.

 

The interface shows as UP even when it stops responding. When I SSH in via eth1 (the "good" NIC) I can ping eth0's IP but nothing else on that network. All I do is

 

ifconfig eth0 down

ifconfig eth0 up

 

and it comes back.

 

I haven't tested recently, but unplugging/replugging the Ethernet cable does NOT have the same effect.

 

I'm thinking of creating a cron job which turns eth0 off and on every 12 hours... LOL
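Something like this in root's crontab, I suppose (untested, and I'm assuming ifconfig lives at /sbin/ifconfig on this box):

# bounce eth0 every 12 hours
0 */12 * * * /sbin/ifconfig eth0 down; sleep 5; /sbin/ifconfig eth0 up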

Message 136 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

Checking logs anyway (the NAS has been disappearing multiple times tonight; every time the NIC trick worked), I see the below in system.log. Is this expected? There is more than this.

 

Jun 14 01:17:01 Enterprise-NAS CRON[30496]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 14 01:17:01 Enterprise-NAS CRON[30497]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 14 01:17:01 Enterprise-NAS CRON[30496]: pam_unix(cron:session): session closed for user root
Jun 14 01:40:52 Enterprise-NAS dbus[2876]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Jun 14 01:40:52 Enterprise-NAS dbus[2876]: [system] Successfully activated service 'org.opensuse.Snapper'
Jun 14 01:40:52 Enterprise-NAS snapperd[30745]: loading 269 failed
Jun 14 01:40:52 Enterprise-NAS snapperd[30745]: loading 270 failed
Jun 14 01:40:52 Enterprise-NAS snapperd[30745]: loading 987 failed
Jun 14 01:40:52 Enterprise-NAS snapperd[30745]: loading 1032 failed
Jun 14 01:40:52 Enterprise-NAS snapperd[30745]: loading 1085 failed
Jun 14 02:17:01 Enterprise-NAS CRON[31395]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 14 02:17:01 Enterprise-NAS CRON[31396]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 14 02:17:01 Enterprise-NAS CRON[31395]: pam_unix(cron:session): session closed for user root
Jun 14 02:40:53 Enterprise-NAS dbus[2876]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Jun 14 02:40:53 Enterprise-NAS dbus[2876]: [system] Successfully activated service 'org.opensuse.Snapper'
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 269 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 270 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 987 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 1032 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 1085 failed
Jun 14 03:17:01 Enterprise-NAS CRON[32301]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 14 03:17:01 Enterprise-NAS CRON[32302]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 14 03:17:01 Enterprise-NAS CRON[32301]: pam_unix(cron:session): session closed for user root
Jun 14 03:40:54 Enterprise-NAS dbus[2876]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Jun 14 03:40:54 Enterprise-NAS dbus[2876]: [system] Successfully activated service 'org.opensuse.Snapper'
Jun 14 03:40:54 Enterprise-NAS snapperd[32543]: loading 269 failed
Jun 14 03:40:54 Enterprise-NAS snapperd[32543]: loading 270 failed
Jun 14 03:40:54 Enterprise-NAS snapperd[32543]: loading 987 failed
Jun 14 03:40:54 Enterprise-NAS snapperd[32543]: loading 1032 failed
Jun 14 03:40:54 Enterprise-NAS snapperd[32543]: loading 1085 failed
Jun 14 04:17:01 Enterprise-NAS CRON[737]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 14 04:17:01 Enterprise-NAS CRON[738]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 14 04:17:01 Enterprise-NAS CRON[737]: pam_unix(cron:session): session closed for user root
Jun 14 04:40:55 Enterprise-NAS dbus[2876]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Jun 14 04:40:55 Enterprise-NAS dbus[2876]: [system] Successfully activated service 'org.opensuse.Snapper'
Jun 14 04:40:55 Enterprise-NAS snapperd[1053]: loading 269 failed
Jun 14 04:40:55 Enterprise-NAS snapperd[1053]: loading 270 failed
Jun 14 04:40:55 Enterprise-NAS snapperd[1053]: loading 987 failed
Jun 14 04:40:55 Enterprise-NAS snapperd[1053]: loading 1032 failed
Jun 14 04:40:55 Enterprise-NAS snapperd[1053]: loading 1085 failed

 

 

 

Similar events in systemd-journal.log:

Jun 14 15:41:07 Enterprise-NAS snapperd[11245]: loading 269 failed
Jun 14 15:41:07 Enterprise-NAS snapperd[11245]: loading 270 failed
Jun 14 15:41:07 Enterprise-NAS snapperd[11245]: loading 987 failed
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 15:41:07 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 15:41:07 Enterprise-NAS snapperd[11245]: loading 1032 failed
Jun 14 15:41:07 Enterprise-NAS snapperd[11245]: loading 1085 failed
Jun 14 16:17:01 Enterprise-NAS CRON[11891]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 14 16:17:01 Enterprise-NAS CRON[11892]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 14 16:17:01 Enterprise-NAS CRON[11891]: pam_unix(cron:session): session closed for user root
Jun 14 16:41:08 Enterprise-NAS dbus[2876]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Jun 14 16:41:08 Enterprise-NAS dbus[2876]: [system] Successfully activated service 'org.opensuse.Snapper'
Jun 14 16:41:08 Enterprise-NAS snapperd[12155]: loading 269 failed
Jun 14 16:41:08 Enterprise-NAS snapperd[12155]: loading 270 failed
Jun 14 16:41:08 Enterprise-NAS snapperd[12155]: loading 987 failed
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 16:41:08 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 16:41:08 Enterprise-NAS snapperd[12155]: loading 1032 failed
Jun 14 16:41:08 Enterprise-NAS snapperd[12155]: loading 1085 failed
Jun 14 17:17:01 Enterprise-NAS CRON[12811]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 14 17:17:01 Enterprise-NAS CRON[12812]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 14 17:17:01 Enterprise-NAS CRON[12811]: pam_unix(cron:session): session closed for user root
Jun 14 17:41:09 Enterprise-NAS dbus[2876]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Jun 14 17:41:09 Enterprise-NAS dbus[2876]: [system] Successfully activated service 'org.opensuse.Snapper'
Jun 14 17:41:09 Enterprise-NAS snapperd[13080]: loading 269 failed
Jun 14 17:41:09 Enterprise-NAS snapperd[13080]: loading 270 failed
Jun 14 17:41:09 Enterprise-NAS snapperd[13080]: loading 987 failed
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 17:41:09 Enterprise-NAS snapperd[13080]: loading 1032 failed
Jun 14 17:41:09 Enterprise-NAS snapperd[13080]: loading 1085 failed
Jun 14 17:42:11 Enterprise-NAS sshd[11072]: pam_unix(sshd:session): session closed for user root
Jun 14 18:17:01 Enterprise-NAS CRON[13730]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 14 18:17:01 Enterprise-NAS CRON[13731]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun 14 18:17:01 Enterprise-NAS CRON[13730]: pam_unix(cron:session): session closed for user root
Jun 14 18:41:10 Enterprise-NAS dbus[2876]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
Jun 14 18:41:10 Enterprise-NAS dbus[2876]: [system] Successfully activated service 'org.opensuse.Snapper'
Jun 14 18:41:10 Enterprise-NAS snapperd[13989]: loading 269 failed
Jun 14 18:41:10 Enterprise-NAS snapperd[13989]: loading 270 failed
Jun 14 18:41:10 Enterprise-NAS snapperd[13989]: loading 987 failed
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty
Jun 14 18:41:10 Enterprise-NAS org.opensuse.Snapper[2876]: ^
Jun 14 18:41:10 Enterprise-NAS snapperd[13989]: loading 1032 failed
Jun 14 18:41:10 Enterprise-NAS snapperd[13989]: loading 1085 failed

 

Message 137 of 191
StephenB
Guru

Re: ReadyNAS Pro 6 crashed again


@tony359 wrote:

I have MD0 (4GB), MD1 (1.3GB), MD127 (1.8TB), MD126 (14.5TB).


Are you using FlexRAID or X-RAID?

Can you post a mdstat.log from the log zip file?

 

MD0 is the OS partition; MD1 is swap (and just raw storage, not BTRFS).

 


@tony359 wrote:

But I was under the impression that the disks should be unmounted in order for those checks to be properly done?

 


BTRFS check should work with the volume mounted if there are no disk writes being made to the volume.  

 

I don't recommend doing that for MD0, and you can't unmount it.

 

If you want to attempt a repair, you could unmount the data volume (or if necessary, reboot in tech support mode and manually assemble the RAID).   Alternatively, destroy the volume and recreate it from the web UI.

 


@tony359 wrote:

Re btrfs I was thinking that exactly - how can the OS check files if they're part of a raid?

 


mdadm (software RAID) creates virtual disk(s), and BTRFS is layered on top of those disks.  
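You can see that layering from ssh if you want, e.g.:

cat /proc/mdstat           # the mdadm arrays built from the sdX partitions
btrfs filesystem show      # the BTRFS volume spanning the md data arrays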

 


@tony359 wrote:
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 269 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 270 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 987 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 1032 failed
Jun 14 02:40:53 Enterprise-NAS snapperd[31638]: loading 1085 failed

These are failures to load snapshots, so it does look like something might be happening with the file system.  But I don't think it's related to the crash.

 


@tony359 wrote:

 

The interface shows as UP even when it stops responding. When I SSH in via eth1 (the "good" NIC) I can ping eth0's IP but nothing else on that network. 


Somehow I missed that.  Probably a clue on what's going on, but I am not sure how to interpret it.

 

Are you using DHCP for eth0?  Or is there a static IP address configured on the NAS? 

 

 

 

 

Message 138 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

@StephenB 

How do you quote on this forum? 🙂 

The "quote" feature quotes the whole message and I haven't figured out a way to "break" the quote line to insert my comments! 

 

> Are you using FlexRAID or X-RAID? Can you post a mdstat.log from the log zip file?

 

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md126 : active raid5 sde3[0] sda3[4] sdd3[3] sdc3[2] sdb3[1]
      15608675328 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md127 : active raid1 sdd4[0] sda4[1]
      1953371904 blocks super 1.2 [2/2] [UU]
      
md1 : active raid10 sde2[0] sda2[4] sdd2[3] sdc2[2] sdb2[1]
      1305600 blocks super 1.2 512K chunks 2 near-copies [5/5] [UUUUU]
      
md0 : active raid1 sde1[0] sda1[4] sdd1[3] sdc1[2] sdb1[1]
      4190208 blocks super 1.2 [5/5] [UUUUU]
      
unused devices: <none>
/dev/md/0:
           Version : 1.2
     Creation Time : Sun Aug 26 00:34:51 2018
        Raid Level : raid1
        Array Size : 4190208 (4.00 GiB 4.29 GB)
     Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Wed Jun 14 19:53:56 2023
             State : clean 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : 33ea1503:0  (local to host 33ea1503)
              UUID : a3f67404:2cd8011d:09213554:fc037d7f
            Events : 484

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8        1        4      active sync   /dev/sda1
/dev/md/1:
           Version : 1.2
     Creation Time : Wed Oct 28 18:16:31 2020
        Raid Level : raid10
        Array Size : 1305600 (1275.00 MiB 1336.93 MB)
     Used Dev Size : 522240 (510.00 MiB 534.77 MB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Wed Mar 29 01:00:17 2023
             State : clean 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : unknown

              Name : 33ea1503:1  (local to host 33ea1503)
              UUID : e1dc0162:09789205:30db0ea7:1cbe88d0
            Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       66        0      active sync   /dev/sde2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
       4       8        2        4      active sync   /dev/sda2
/dev/md/data-0:
           Version : 1.2
     Creation Time : Sun Aug 26 00:35:10 2018
        Raid Level : raid5
        Array Size : 15608675328 (14885.59 GiB 15983.28 GB)
     Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Wed Jun 14 19:11:02 2023
             State : clean 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : unknown

              Name : 33ea1503:data-0  (local to host 33ea1503)
              UUID : d6f2d4fc:a4ed5491:5e974d0e:f44619ef
            Events : 7339

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8        3        4      active sync   /dev/sda3
/dev/md/data-1:
           Version : 1.2
     Creation Time : Wed Oct 28 18:16:34 2020
        Raid Level : raid1
        Array Size : 1953371904 (1862.88 GiB 2000.25 GB)
     Used Dev Size : 1953371904 (1862.88 GiB 2000.25 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jun 14 19:11:02 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : 33ea1503:data-1  (local to host 33ea1503)
              UUID : 1114a09f:c1c52914:f69c2626:a1a5ff09
            Events : 71

    Number   Major   Minor   RaidDevice State
       0       8       52        0      active sync   /dev/sdd4
       1       8        4        1      active sync   /dev/sda4

 

>BTRFS check should work with the volume mounted if there are no disk writes being made to the volume.  

>I don't recommend doing that for MD0, and you can't unmount it.

> If you want to attempt a repair, you could unmount the data volume (or if necessary, reboot in tech support mode and > manually assemble the RAID).   Alternatively, destroy the volume and recreate it from the web UI.

 

Well, I cannot be 100% sure there won't be disk writes being made.

I thought that checking the OS partition was the key - again, see the message linked above, which may be misleading?

 

Destroy and re-create the volume? How?

 

Why wouldn't you recommend a btrfs check on MD0? Not questioning, just curious.

 

>These are failures to load snapshots, so it does look like something might be happening with the file system.  But I don't think it's related to the crash.

 

Maybe. But I think every time I found the NAS unresponsive, when I went to the log page to save logs, the last entries in the UI logs were snapshots being deleted. Probably a coincidence but...

 

>Somehow I missed that.  Probably a clue on what's going on, but I am not sure how to interpret it.

> Are you using DHCP for eth0?  Or is there a static IP address configured on the NAS? 

 

Nah, it's just that this has been going on forever! 🙂

There is a DHCP server indeed, but the NAS is set to a static IP. The address I am using is outside of the DHCP pool. 

 

 

Message 139 of 191
StephenB
Guru

Re: ReadyNAS Pro 6 crashed again


@tony359 wrote:

How do you quote on this forum?  

The "quote" feature quotes the whole message and I haven't figured out a way to "break" the quote line to insert my comments! 

 


Once you have the full quote, you can edit out the parts you don't want.  Though it can be quirky - you can't quote the code inserts (</>) at all for some reason, and sometimes it is hard to position the cursor. Also, sometimes bugs in the forum software result in messed up markup text, which give you errors when you post.  I switch into html mode when I run into those problems, and manually fix things up.

 

Simplifying mdstat shows this:

 

 

/dev/md/data-0:
           Version : 1.2
     Creation Time : Sun Aug 26 00:35:10 2018
        Raid Level : raid5
        Array Size : 15608675328 (14885.59 GiB 15983.28 GB)

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8        3        4      active sync   /dev/sda3
/dev/md/data-1:
           Version : 1.2
     Creation Time : Wed Oct 28 18:16:34 2020
        Raid Level : raid1
        Array Size : 1953371904 (1862.88 GiB 2000.25 GB)


    Number   Major   Minor   RaidDevice State
       0       8       52        0      active sync   /dev/sdd4
       1       8        4        1      active sync   /dev/sda4

 

 



This would be X-RAID, since there is vertical expansion of two disks - sda and sdd.  There are two RAID groups, data-0 and data-1. 

 

Data-0 (md126) was created first (back in 2018), and shows that you originally had a 5x4TB array.  Data-1 (md127) was created in Oct 2020, when you expanded by replacing sda and sdd with 6 TB drives.  You can see that the NAS created additional partitions on these disks (sda4 and sdd4), and created a RAID-1 array on the additional space.

 

These two groups are concatenated by BTRFS into a single data volume.

 

FWIW, back in 2018, data-0 was md127.  When you expanded, data-1 became md127, and data-0 became md126.

 

Though I haven't tested the BTRFS commands extensively, I believe checking md127 will test the entire concatenated volume.  You can also try checking md126 if you want - since there is no repair, it is safe.

 


@tony359 wrote:

 

Well I cannot be 100% sure there won't be disk writes being done.

Why wouldn't you recommend a btrfs-check on MD0

 


While you can't be certain on the data volume, the risk would be that btrfs check would give you spurious errors (since there is no repair happening).  You can minimize that risk by disconnecting eth0. You could also try umount /data and then run the test.  The NAS could log spurious errors when readynasd tries to access the unmounted volume, so you'd need to ignore those.  Since the volume wouldn't be mounted, you could add the --repair option to the check.  When done, it'd be best to reboot the NAS.
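Roughly, that sequence would be (a sketch, assuming the data volume is mounted at /data as usual):

umount /data
btrfs check /dev/md127
btrfs check --repair /dev/md127    # optional and risky - only with a full backup in place
reboot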

 

As far as the OS partition goes, Linux is installed on that partition, so the odds of writes happening during the check are much higher.  You can check md0 off-line by rebooting the NAS in tech support mode: you'd manually assemble md0 and not mount it.  Linux would be running off the boot flash, so as long as btrfs is fully installed there (which it should be), you can run the check.
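A rough sketch of that path, once in tech support mode (the md0 member partitions are taken from your mdstat above - double-check them before running anything):

mdadm --assemble --readonly /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
btrfs check /dev/md0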

 

The repair attempt is risky, so it is important to have a full backup before you try it.  Obviously destroying the volume would also require a full backup.

 


@tony359 wrote:

I thought that checking the OS partition was the key - again see the message linked above which maybe is misleading?

 


These errors 

Jun 14 17:41:09 Enterprise-NAS org.opensuse.Snapper[2876]: :1: parser error : Document is empty

do suggest something isn't quite right on the OS partition. The failure to load the snapshots also could be related to a mildly corrupted config stored on the OS partition. 

 

That doesn't necessarily mean that BTRFS file system for the OS is corrupted - something else might have gone wrong over the years.  And repairing the file system itself likely wouldn't repair the config files - it would just repair (or delete) internally inconsistent structures.

 


@tony359 wrote:

 

Destroy and re-create the volume? How?

 


The brute-force path is to do a factory default from the system settings page (or from the boot menu).  This re-formats the disks, and does a fresh OS install from the boot flash.  You'd then need to reconfigure the NAS and reinstall any apps, and restore the files from backup.

 

It is possible to save/restore the config to speed things up a bit, but since we know something is wrong with the snapshot setup, I don't recommend doing that.

 

You can also destroy the data volume by right-clicking the settings wheel on the volumes page, and choosing destroy.  Then select the disk from the center, and create a new one.  Uninstall apps first (as they are installed to the data volume).  When the new volume is created, you'd need to reconfigure the shares, reinstall apps, and recreate any backup jobs.

 

Since the factory default rebuilds everything, it is probably a good next step. It would give you confidence that the configuration and file systems are all completely clean.  Any other path will leave you wondering if you missed something.

 

 

 

 

Message 140 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again


@StephenB wrote:

@tony359 wrote:

How do you quote on this forum?  

The "quote" feature quotes the whole message and I haven't figured out a way to "break" the quote line to insert my comments! 

 


Once you have the full quote, you can edit out the parts you don't want.  Though it can be quirky - you can't quote the code inserts (</>) at all for some reason, and sometimes it is hard to position the cursor. Also, sometimes bugs in the forum software result in messed up markup text, which give you errors when you post.  I switch into html mode when I run into those problems, and manually fix things up.

Ok so you manually go into HTML and add </blockquote> and <blockquote> when needed? What I cannot figure out is how to BREAK the quote to add my comment.

I did this manually on the HTML page but it's very messy and time consuming 🙂

 

Simplifying mdstat shows this:

 

 

 

/dev/md/data-0:
           Version : 1.2
     Creation Time : Sun Aug 26 00:35:10 2018
        Raid Level : raid5
        Array Size : 15608675328 (14885.59 GiB 15983.28 GB)

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8        3        4      active sync   /dev/sda3
/dev/md/data-1:
           Version : 1.2
     Creation Time : Wed Oct 28 18:16:34 2020
        Raid Level : raid1
        Array Size : 1953371904 (1862.88 GiB 2000.25 GB)


    Number   Major   Minor   RaidDevice State
       0       8       52        0      active sync   /dev/sdd4
       1       8        4        1      active sync   /dev/sda4

 

 



This would be XRAID, since there is vertical expansion of two disks - sda, and sdd.  There are two RAID groups data-0 and data-1. 

 

Data-0 (md126) was created first (back in 2018), and shows that you originally had a 5x4TB array.  Data-1 (md127) was created in Oct 2020, when you expanded by replacing sda and sdd with 6 TB drives.  You can see that the NAS created additional partitions on these disks (sda4 and sdd4), and created a RAID-1 array on the additional space.

I started with 2x4TB. Then I added another 4TB, then a 6TB, then another 6TB.

 

 

These two groups are concatenated by BTRFS into a single data volume.

 

FWIW, back in 2018, Data-0 was md127.  When you expanded, data-1 became md127, and data-0 became 126.

 

Though I haven't tested the BTRFS commands extensively, I believe checking md127 will test the entire concatenated volume.  You can also try checking md126 if you want - since there is no repair, it is safe.

thanks

 

Thanks for all the other advice - I just got lost in the HTML page so I'm adding a master comment here 🙂

 

I'll complete the SMART tests, then I'll start doing some FS tests.

 

Regarding wiping the OS partition: yes, it's the nuclear option, which has been brought up several times over the months. 

I appreciate it's a good option, but as I said it would mean transferring 12TB of data. 

 

My desktop has a 24-port RAID card. I can pile up some HDDs I have around, build a BORG ship next to my computer out of power supplies, cables and drives, and transfer everything there while I hit the red button on the NAS.

 

It's just that 

1. I'd rather not do that 🙂

2. it's a Linux PC, so I'm sure there is a way to fix it without wiping it (and I am in NO WAY questioning anybody's help or skills here, just moaning aloud!)

3. I have a feeling there is a good chance that I'll spend a week wiping the NAS and then the same issue will happen again. 🙂

 

As the NAS currently seems to be in a state where it disappears every few hours, I **could** try replacing the HDDs with a couple of random HDDs and doing a factory reset as a test. But as I said, in the past the NAS would work for 2 months without glitches, so I feel that would also be an inconclusive test.

 

Oh my. 🙂 

Message 141 of 191
StephenB
Guru

Re: ReadyNAS Pro 6 crashed again


@tony359 wrote:

Ok so you manually go into HTML and add </blockquote> and <blockquote> when needed? What I cannot figure out is how to BREAK the quote to add my comment.

 


I only use html mode if something goes wrong.  As I tried to say, normally I just add my comments at the bottom of the full quote, and then remove the parts of the quote I don't want.  So I'm not "breaking" the quote.   I'm quoting multiple times, and removing text as I go.

 


@tony359 wrote:

I appreciate it's a good option but as I said it would mean transferring 12TB of data.

1. I'd rather not do that 🙂

 


One general comment - RAID isn't enough to keep your data safe.  So you should have a backup anyway.

 


@tony359 wrote:

It's just that 

1. I'd rather not do that 🙂

2. it's a linux PC, I'm sure there is a way to fix it without wiping it (and I am in NO WAY questioning anybody's help or skills here, just moaning aloud!)

3. I have a feeling there is a good chance that I spend a week wiping the NAS and then the same issue happens again. 🙂

 


Well, you've been chasing this problem for a long time, and we still don't know the cause.

 

So it is possible that you'll spend a few days rebuilding the NAS, and still not solve the problem.  But

  1. Offloading and restoring the data is largely unattended, so not a huge burden on your time (see the rsync sketch after this list).
  2. BTRFS repair is frankly dangerous - you can easily end up with data loss (or an unbootable NAS).  So you need to offload the data either way.
  3. Even if the problem still happens, you will have ruled out corruption of the NAS software/file system - which is progress.  If you attempt repairs instead and they don't work, you'll never know whether there was a repair you missed.
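
(A minimal sketch of that offload step, assuming a Linux desktop with the spare disks mounted at /mnt/offload and ssh enabled on the NAS - the hostname and share name here are examples:)

# run from the desktop, once per share; safe to re-run, rsync resumes where it left off
rsync -aH --info=progress2 root@nas.local:/data/Documents/ /mnt/offload/Documents/
# afterwards, a dry-run with checksums lists anything that did not copy cleanly
rsync -aHn --checksum root@nas.local:/data/Documents/ /mnt/offload/Documents/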

 

 

 

 

 

Message 142 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again


@StephenB wrote:

@tony359 wrote:

Ok so you manually go into HTML and add </blockquote> and <blockquote> when needed? What I cannot figure out is how to BREAK the quote to add my comment.

 


I only use html mode if something goes wrong.  As I tried to say, normally I just add my comments at the bottom of the full quote, and then remove the parts of the quote I don't want.  So I'm not "breaking" the quote.   I'm quoting multiple times, and removing text as I go.

 



The clunkiest forum ever 😄 Sorry! 🙂

But thanks. Better than nothing!

 

 


@tony359 wrote:

I appreciate it's a good option but as I said it would mean transferring 12TB of data.

1. I'd rather not do that 🙂

 


One general comment - RAID isn't enough to keep your data safe.  So you should have a backup anyway.


 


All my data is backed up somewhere. Some is online; iDrive is not the best service available when it comes to restoring data, and my broadband is "only" 100Mbit. 

Restoring from online means lots of time.

The rest is on local HDDs. Doable, but it would take lots of time anyway.

 

Most of that time is unattended - but a hassle nevertheless. 

 

I 100% see your point on the rest (I'm already tired of quoting and deleting LOL). A fresh start is a fresh start. Most importantly, it would rule out the SW straight away.

 

I'll think about the Borg ship.

Message 143 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

SMART long tests came up clean on all drives.
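
(For anyone following along, the equivalent from ssh is roughly this - a sketch; drive names vary, repeat per disk:)

smartctl -t long /dev/sda        # start the extended (long) self-test; non-destructive, runs in the background
smartctl -l selftest /dev/sda    # read the results table once the test has finished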

BTRFS check did not. Below is the output for md127:

 

 

root@Enterprise-NAS:~# btrfs check --force /dev/md127
WARNING: filesystem mounted, continuing because of --force
Checking filesystem on /dev/md127
UUID: 256f20f5-bce4-4561-91a8-f5efb92f82fe
checking extents
checking free space cache
checking fs roots
root 17467 inode 1177157 errors 100, file extent discount
Found file extent holes:
        start: 4915200, len: 40960
        start: 4980736, len: 36864
        start: 5103616, len: 65536
        start: 5242880, len: 65536
        start: 5365760, len: 45056
        start: 5439488, len: 53248
        start: 5509120, len: 40960
        start: 5570560, len: 28672
        start: 5701632, len: 57344
        start: 5763072, len: 61440
        start: 5828608, len: 65536
        start: 5902336, len: 61440
        start: 6025216, len: 118784
        start: 6156288, len: 12288
        start: 6217728, len: 28672
        start: 6287360, len: 45056
        start: 6356992, len: 16384
        start: 6426624, len: 8192
        start: 6483968, len: 20480
        start: 6557696, len: 24576
        start: 6623232, len: 32768
        start: 6680576, len: 61440
        start: 6750208, len: 20480
        start: 6815744, len: 40960
        start: 6881280, len: 24576
        start: 6938624, len: 57344
        start: 7012352, len: 135168
        start: 7200768, len: 53248
        start: 7274496, len: 53248
        start: 7405568, len: 57344
        start: 7471104, len: 45056
        start: 7598080, len: 61440
        start: 7729152, len: 40960
        start: 7790592, len: 32768
        start: 7921664, len: 28672
        start: 7983104, len: 49152
        start: 8060928, len: 36864
        start: 8183808, len: 32768
        start: 8253440, len: 53248
        start: 8323072, len: 49152
        start: 8388608, len: 16384
        start: 8454144, len: 49152
        start: 8519680, len: 16384
        start: 8585216, len: 40960
        start: 8642560, len: 61440
        start: 8716288, len: 12288
        start: 8781824, len: 20480
        start: 8843264, len: 49152
        start: 8904704, len: 61440
        start: 9043968, len: 73728
        start: 9166848, len: 53248
        start: 9232384, len: 73728
        start: 9367552, len: 24576
        start: 9424896, len: 36864
        start: 9572352, len: 40960
        start: 9625600, len: 36864
        start: 9695232, len: 65536
        start: 9830400, len: 94208
[roots 17495, 17894, 17919, 17952, 17962, 17981, 18045 and 18148 each repeat the same report - inode 1177157 errors 100, file extent discount - with an identical list of file extent holes; repeated output omitted]
ERROR: errors found in fs roots
found 13917869395968 bytes used, error(s) found
total csum bytes: 501016
total tree bytes: 1743912960
total fs tree bytes: 1651933184
total extent tree bytes: 85065728
btree space waste bytes: 341607782
file data blocks allocated: 43039469133824
 referenced 32223623094272
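
(A note on reading this: "errors 100, file extent discount" means the check found holes in one file's extent list, and the roots listed - 17467 through 18148 - look like subvolume/snapshot IDs, i.e. the same damaged inode 1177157 seen through several snapshots. To map those IDs to names, something like the following should work - a sketch, assuming the volume is mounted at /data; the subvolume path is a placeholder:)

btrfs subvolume list /data                                    # match root IDs (17467, 17495, ...) to subvolume names
btrfs inspect-internal inode-resolve 1177157 /data/<subvol>   # resolve the inode to a file path inside that subvolume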

 

 

...continued

Message 144 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

This is md126:

 

 

root@Enterprise-NAS:~# btrfs check --force /dev/md126
WARNING: filesystem mounted, continuing because of --force
Checking filesystem on /dev/md126
UUID: 256f20f5-bce4-4561-91a8-f5efb92f82fe
checking extents
checking free space cache
checking fs roots
[output identical to the md127 check above - it is the same filesystem (same UUID), so the same nine roots report inode 1177157 errors 100, file extent discount, with the same extent holes and the same summary totals; repeated output omitted]

 

 

 

Message 145 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

(Sorry for splitting the message in three but there is a 20K limit per message and I cannot attach TXT files)

 

What do I learn from this?

 

I have now ordered the required cables to mount a bunch of HDDs onto my desktop RAID card. I will transfer everything soon and then I will reset the NAS. I give up 🙂

 

(That said, it's been behaving well recently, again!!)

 

Oh, one little bit of gossip on the PSU: I think I am responsible for the swollen caps! When I replaced the fan, I powered it from the PSU itself. It's a Noctua, so little airflow. I noticed the PSU powers it at 5V (it's a 12V fan - unless I made a mistake and the original fan was 5V?).

 

So the bottom line is: I baked that PSU, 100%. I tested it a bit and those heatsinks were scorching hot while running. I re-wired the fan so it's going to be powered from 12V somewhere else. 

The only thing I am going to do differently is to reverse the airflow: I know it's supposed to pull air IN, but I don't agree with that. I think pushing air OUT is a better option because, as it stands, the PSU is pulling in the hot air expelled from the main fan.

Message 146 of 191
schumaku
Guru

Re: ReadyNAS Pro 6 crashed again


@tony359 wrote:

(Sorry for splitting the message in three but there is a 20K limit per message and I cannot attach TXT files)

What do I learn from this?


The added value of these logs is over-estimated - there is nothing we can take from the full logs here.

 


@tony359 wrote:

Oh, one little gossip on the PSU: I think I am responsible for the swollen caps! When I replaced the fan, I powered it from the PSU itself. It's a noctua so little airflow. I noticed the PSU powers it at 5V (it's a 12V fan, unless I made a mistake and the original fan was 5V?).


A bunch of popular mistakes combined. It's not just that people sometimes don't read the voltage specs - much more it's the **** idea of replacing stock fans with Noctua fans (very popular, must be a good choice because, hey, these are soooo silent). They are silent because the RPM is lower than the usual high-speed fans (sometimes in the 10'000...15'000 RPM range), so the airflow and air pressure generated are just a fraction of what the hardware designers intended, and of what is required to keep the internals reasonably cool. 

 

 

Message 147 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

For the fan, I take responsibility. I would normally check the airflow, and in fact I think the one I used more or less matched the original - it's just that I wasn't expecting the PCB to supply 5V. Did I get the wrong voltage fan, or is the PSU supplying 5V when idle to keep the noise down? In other words: any chance the PCB will ramp up to 12V when the components warm up?

 

Re. the BTRFS logs: I'm not sure I follow.

Message 148 of 191
schumaku
Guru

Re: ReadyNAS Pro 6 crashed again


@tony359 wrote:

Re. the BTRFS logs: I'm not sure I follow.


There are plenty of reasons why community admins restrict the size of attachments, and why txt files are prohibited (by a simple test of the file extension). Just as with # dmesg kernel output, selected parts are commonly sufficient. If support engineers need the full data, they will ask for the essential parts, or sometimes for the complete log, using some cloud system for temporary transfer. It's not worth handling this amount of data on a community forum.
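
(For example, to trim a kernel log down to the lines worth posting - a sketch:)

dmesg | grep -iE 'error|fail|btrfs' | tail -n 50    # keep only the interesting lines, most recent last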

Message 149 of 191
tony359
Apprentice

Re: ReadyNAS Pro 6 crashed again

Gotcha. 

 

But I am not a Linux engineer, so how do I know which part to post? 🙂

On other forums I like that when CODE is added, it ends up in a window with scroll bars. That makes reading much easier.

 

Anyway, I hope someone can figure something out from those logs. 🙂

Message 150 of 191