Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hello,
I just installed the new upgrade 6.7.1 on my RN214 and got the confusing message that my volumes can't be loaded.
It's the same problem others have:
ReadyNAS 516 just upgraded to 6.7.1 and data volume has not loaded
I don't know what to do.
Best regards
Kalle
Solved! Go to Solution.
Accepted Solutions
You could perhaps try ReadyNASOS 6.7.3-T283 (Beta 1)
There's a fix in there for one possible reason why a volume may not mount. I wonder if you ran into that.
All Replies
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Maybe you want to upvote this "idea": https://community.netgear.com/t5/Idea-Exchange-for-ReadyNAS/Change-the-incredibly-confusing-error-me...
Download the logs from the GUI, and look for md127 in dmesg.log.
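If you prefer the command line, the downloaded log archive can be searched in one go. A minimal sketch, assuming the zip was extracted into a local directory called readynas-logs (the path is an assumption):

```shell
# Search the extracted log set for the data array md127.
# The readynas-logs path is an assumption about where you unzipped.
grep -n 'md127' readynas-logs/dmesg.log 2>/dev/null \
  || echo "no md127 entries found in dmesg.log"
```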
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi jak0lantash,
thanks for your reply.
I have downloaded the logs, but there were no entries like "md127" in dmesg.log.
My Netgear has four disks and is running RAID 5.
Here is a snippet from volume.log:
RAID data-0: Device: sda3 HostID: 119c1bce UUID: 632ff5fd-6534-2524-9c97-98d780e47e94 Size: 23413000704 Component size: 7804333680 Level: 5 State: inactive Action: idle Flags 0x0 Members: 4 Pool: data-0
Disks:
sda: HostID: 119c1bce Flags: 0x0 Size: 7814037168 (3726 GB) Free: 4062 Controller 0 Channel: 0 Model: WDC WD40EFRX-68WT0N0 Serial: WD-WCC4E2XEK9VE Firmware: 82.00A82 RPM: 5400
SMART data: Reallocated Sectors: 0 Reallocation Events: 0 Spin Retry Count: 0 Current Pending Sector Count: 0 Uncorrectable Sector Count: 0 Temperature: 25 Start/Stop Count: 6678 Power-On Hours: 2962 Power Cycle Count: 387 Load Cycle Count: 6647 Latest Self Test: Passed
sdb: HostID: 119c1bce Flags: 0x0 Size: 7814037168 (3726 GB) Free: 4062 Controller 0 Channel: 1 Model: WDC WD40EFRX-68WT0N0 Serial: WD-WCC4E4CZPUVA Firmware: 82.00A82 RPM: 5400
SMART data: Reallocated Sectors: 0 Reallocation Events: 0 Spin Retry Count: 0 Current Pending Sector Count: 0 Uncorrectable Sector Count: 0 Temperature: 25 Start/Stop Count: 6745 Power-On Hours: 2961 Power Cycle Count: 386 Load Cycle Count: 6715 Latest Self Test: Passed
sdc: HostID: 119c1bce Flags: 0x0 Size: 7814037168 (3726 GB) Free: 4062 Controller 0 Channel: 2 Model: WDC WD40EFRX-68WT0N0 Serial: WD-WCC4E3XCNPFF Firmware: 82.00A82 RPM: 5400
SMART data: Reallocated Sectors: 0 Reallocation Events: 0 Spin Retry Count: 0 Current Pending Sector Count: 0 Uncorrectable Sector Count: 0 Temperature: 25 Start/Stop Count: 6716 Power-On Hours: 2961 Power Cycle Count: 383 Load Cycle Count: 6688 Latest Self Test: Passed
sdd: HostID: 119c1bce Flags: 0x0 Size: 7814037168 (3726 GB) Free: 4062 Controller 0 Channel: 3 Model: WDC WD40EFRX-68WT0N0 Serial: WD-WCC4E3JR81HF Firmware: 82.00A82 RPM: 5400
SMART data: Reallocated Sectors: 0 Reallocation Events: 0 Spin Retry Count: 0 Current Pending Sector Count: 0 Uncorrectable Sector Count: 0 Temperature: 25 Start/Stop Count: 6503 Power-On Hours: 2961 Power Cycle Count: 383 Load Cycle Count: 6476 Latest Self Test: Passed
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
I am not an expert with this, but I guess sda is down and that's why it can't start.
Here is the snippet from the dmesg log.
scsi 0:0:0:0: Direct-Access ATA WDC WD40EFRX-68W 0A82 PQ: 0 ANSI: 5
[Sun May 14 12:31:13 2017] sd 0:0:0:0: [sda] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[Sun May 14 12:31:13 2017] sd 0:0:0:0: [sda] 4096-byte physical blocks
[Sun May 14 12:31:13 2017] sd 0:0:0:0: [sda] Write Protect is off
[Sun May 14 12:31:13 2017] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[Sun May 14 12:31:13 2017] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[Sun May 14 12:31:13 2017] ata5: SATA link down (SStatus 0 SControl 300)
[Sun May 14 12:31:13 2017] sda: sda1 sda2 sda3
[Sun May 14 12:31:13 2017] sd 0:0:0:0: [sda] Attached SCSI disk
[Sun May 14 12:31:13 2017] pci 0000:00:04.1: [1c36:8011] type 00 class 0x100000
[Sun May 14 12:31:13 2017] al_crypto 0000:00:04.1: enabling device (0400 -> 0402)
[Sun May 14 12:31:13 2017] al_crypto 0000:00:04.1: al_crypto_pci_probe: Skipping alg/hash initialization, no allocated channels
[Sun May 14 12:31:13 2017] al_crypto 0000:00:04.1: crc/csum algorithms registered in /proc/crypto
[Sun May 14 12:31:13 2017] al_crypto 0000:00:04.0: algorithms registered in /proc/crypto
[Sun May 14 12:31:13 2017] al_crypto 0000:00:04.0: hash algorithms registered in /proc/crypto
[Sun May 14 12:31:13 2017] al_crypto 0000:00:04.0: al_crypto_pci_probe: Skipping crc initialization, no allocated channels
[Sun May 14 12:31:13 2017] usbcore: registered new interface driver usbhid
[Sun May 14 12:31:13 2017] usbhid: USB HID core driver
[Sun May 14 12:31:13 2017] ip_tables: (C) 2000-2006 Netfilter Core Team
[Sun May 14 12:31:13 2017] NET: Registered protocol family 10
[Sun May 14 12:31:13 2017] NET: Registered protocol family 17
[Sun May 14 12:31:13 2017] 8021q: 802.1Q VLAN Support v1.8
[Sun May 14 12:31:13 2017] Key type dns_resolver registered
[Sun May 14 12:31:13 2017] Registering SWP/SWPB emulation handler
[Sun May 14 12:31:13 2017] registered taskstats version 1
[Sun May 14 12:31:13 2017] BTRFS: using crc32c-generic for crc32c
[Sun May 14 12:31:13 2017] Btrfs loaded
[Sun May 14 12:31:13 2017] input: gpio_keys as /devices/platform/gpio_keys/input/input0
[Sun May 14 12:31:13 2017] rtc-ds1307 0-0068: setting system clock to 2017-05-14 10:31:13 UTC (1494757873)
[Sun May 14 12:31:14 2017] usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
[Sun May 14 12:31:14 2017] usb-storage 2-3:1.0: USB Mass Storage device detected
[Sun May 14 12:31:14 2017] scsi host6: usb-storage 2-3:1.0
[Sun May 14 12:31:15 2017] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[Sun May 14 12:31:16 2017] ata4.00: ATA-9: WDC WD40EFRX-68WT0N0, 82.00A82, max UDMA/133
[Sun May 14 12:31:16 2017] ata4.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[Sun May 14 12:31:16 2017] ata4.00: configured for UDMA/133
[Sun May 14 12:31:16 2017] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[Sun May 14 12:31:16 2017] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[Sun May 14 12:31:16 2017] ata2.00: ATA-9: WDC WD40EFRX-68WT0N0, 82.00A82, max UDMA/133
[Sun May 14 12:31:16 2017] ata2.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[Sun May 14 12:31:16 2017] ata2.00: configured for UDMA/133
[Sun May 14 12:31:16 2017] scsi 1:0:0:0: Direct-Access ATA WDC WD40EFRX-68W 0A82 PQ: 0 ANSI: 5
[Sun May 14 12:31:16 2017] sd 1:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[Sun May 14 12:31:16 2017] sd 1:0:0:0: [sdb] 4096-byte physical blocks
[Sun May 14 12:31:16 2017] sd 1:0:0:0: [sdb] Write Protect is off
[Sun May 14 12:31:16 2017] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[Sun May 14 12:31:16 2017] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[Sun May 14 12:31:16 2017] ata3.00: ATA-9: WDC WD40EFRX-68WT0N0, 82.00A82, max UDMA/133
[Sun May 14 12:31:16 2017] ata3.00: 7814037168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
[Sun May 14 12:31:16 2017] ata3.00: configured for UDMA/133
[Sun May 14 12:31:16 2017] scsi 2:0:0:0: Direct-Access ATA WDC WD40EFRX-68W 0A82 PQ: 0 ANSI: 5
[Sun May 14 12:31:16 2017] sd 2:0:0:0: [sdc] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[Sun May 14 12:31:16 2017] sd 2:0:0:0: [sdc] 4096-byte physical blocks
[Sun May 14 12:31:16 2017] sd 2:0:0:0: [sdc] Write Protect is off
[Sun May 14 12:31:16 2017] sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[Sun May 14 12:31:16 2017] sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[Sun May 14 12:31:16 2017] scsi 3:0:0:0: Direct-Access ATA WDC WD40EFRX-68W 0A82 PQ: 0 ANSI: 5
[Sun May 14 12:31:16 2017] sd 3:0:0:0: [sdd] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[Sun May 14 12:31:16 2017] sd 3:0:0:0: [sdd] 4096-byte physical blocks
[Sun May 14 12:31:16 2017] sd 3:0:0:0: [sdd] Write Protect is off
[Sun May 14 12:31:16 2017] sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[Sun May 14 12:31:16 2017] sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[Sun May 14 12:31:16 2017] sdb: sdb1 sdb2 sdb3
[Sun May 14 12:31:16 2017] sd 1:0:0:0: [sdb] Attached SCSI disk
[Sun May 14 12:31:16 2017] sdd: sdd1 sdd2 sdd3
[Sun May 14 12:31:16 2017] sd 3:0:0:0: [sdd] Attached SCSI disk
[Sun May 14 12:31:16 2017] sdc: sdc1 sdc2 sdc3
[Sun May 14 12:31:16 2017] sd 2:0:0:0: [sdc] Attached SCSI disk
[Sun May 14 12:31:16 2017] ata6: SATA link down (SStatus 0 SControl 300)
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
You should see further mentions of sda3, and of md127, further down in dmesg.log.
If you don't, there is a section "=== df -h ===" at the end of volume.log; check the line:
/dev/md0 4.0G 517M 3.3G 14% /
Is it 100% full?
You can also search systemd-journal.log and mdstat.log for mentions of md127.
How many HDDs do you have in the NAS?
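To pull that one line out of volume.log automatically, something like this works (a sketch; the readynas-logs extraction path is an assumption):

```shell
# Print the root volume usage from the "=== df -h ===" section
# of volume.log. The readynas-logs path is an assumption.
awk '/^\/dev\/md0 / { print "root usage: " $5 }' readynas-logs/volume.log 2>/dev/null \
  || echo "volume.log not found"
```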
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hello,
there are no further mentions of sda3 and md127 in dmesg.log. I had already pasted everything there was last time.
As you requested, here are the outputs:
~ volume.log ~
/dev/md0 3.7G 1.6G 1.9G 46% /
~ volume.log ~
"md127" can be found 10 times in the file. This is the whole excerpt containing all of them.
-- Logs begin at Sat 2017-03-25 22:09:02 CET, end at Sun 2017-05-14 12:33:55 CEST. --
May 12 22:00:37 ReadyNAS systemd[1]: Starting Create Static Device Nodes in /dev...
May 12 22:00:37 ReadyNAS systemd[1]: Mounting FUSE Control File System...
May 12 22:00:37 ReadyNAS systemd[1]: Mounting Configuration File System...
May 12 22:00:37 ReadyNAS systemd[1]: Starting Apply Kernel Variables...
May 12 22:00:37 ReadyNAS systemd[1]: Mounted FUSE Control File System.
May 12 22:00:37 ReadyNAS systemd[1]: Mounted Configuration File System.
May 12 22:00:37 ReadyNAS systemd[1]: Started hwclock.service.
May 12 22:00:37 ReadyNAS systemd[1]: Started Apply Kernel Variables.
May 12 22:00:37 ReadyNAS systemd[1]: Reached target System Time Synchronized.
May 12 22:00:37 ReadyNAS systemd[1]: Starting Remount Root and Kernel File Systems...
May 12 22:00:37 ReadyNAS systemd[1]: Started Create Static Device Nodes in /dev.
May 12 22:00:37 ReadyNAS systemd[1]: Started Remount Root and Kernel File Systems.
May 12 22:00:37 ReadyNAS systemd[1]: Starting Rebuild Hardware Database...
May 12 22:00:37 ReadyNAS systemd[1]: Starting Load/Save Random Seed...
May 12 22:00:37 ReadyNAS systemd-journald[1018]: Journal started
May 12 22:00:37 ReadyNAS systemd[1]: Starting udev Kernel Device Manager...
May 12 22:00:37 ReadyNAS systemd[1]: Started Journal Service.
May 12 22:00:37 ReadyNAS systemd[1]: Started Load/Save Random Seed.
May 12 22:00:37 ReadyNAS systemd[1]: Starting Flush Journal to Persistent Storage...
May 12 22:00:38 ReadyNAS systemd[1]: Started Flush Journal to Persistent Storage.
May 12 22:00:38 ReadyNAS kernel: md: md127 stopped.
May 12 22:00:38 ReadyNAS kernel: md: bind<sdb3>
May 12 22:00:38 ReadyNAS kernel: md: bind<sdc3>
May 12 22:00:38 ReadyNAS kernel: md: bind<sdd3>
May 12 22:00:38 ReadyNAS kernel: md: bind<sda3>
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sda3 operational as raid disk 0
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sdd3 operational as raid disk 3
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sdc3 operational as raid disk 2
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sdb3 operational as raid disk 1
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: allocated 4274kB
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
May 12 22:00:38 ReadyNAS kernel: RAID conf printout:
May 12 22:00:38 ReadyNAS kernel: --- level:5 rd:4 wd:4
May 12 22:00:38 ReadyNAS kernel: disk 0, o:1, dev:sda3
May 12 22:00:38 ReadyNAS kernel: disk 1, o:1, dev:sdb3
May 12 22:00:38 ReadyNAS kernel: disk 2, o:1, dev:sdc3
May 12 22:00:38 ReadyNAS kernel: disk 3, o:1, dev:sdd3
May 12 22:00:38 ReadyNAS kernel: md127: detected capacity change from 0 to 11987456360448
May 12 22:00:38 ReadyNAS systemd[1]: Started udev Kernel Device Manager.
May 12 22:00:38 ReadyNAS start_raids[1004]: mdadm: /dev/md/data-0 has been started with 4 drives.
May 12 22:00:38 ReadyNAS systemd[1]: Found device /dev/md1.
May 12 22:00:38 ReadyNAS systemd[1]: Activating swap md1...
May 12 22:00:38 ReadyNAS kernel: Adding 1046524k swap on /dev/md1. Priority:-1 extents:1 across:1046524k
May 12 22:00:38 ReadyNAS systemd[1]: Activated swap md1.
May 12 22:00:38 ReadyNAS systemd[1]: Started Rebuild Hardware Database.
May 12 22:00:39 ReadyNAS kernel: BTRFS: device label 119c1bce:data devid 1 transid 13920 /dev/md127
May 12 22:00:39 ReadyNAS systemd[1]: Found device /dev/disk/by-label/119c1bce:data.
May 12 22:00:39 ReadyNAS systemd[1]: Started MD arrays.
May 12 22:00:39 ReadyNAS systemd[1]: Starting udev Coldplug all Devices...
May 12 22:00:39 ReadyNAS systemd[1]: Reached target Local File Systems (Pre).
May 12 22:00:39 ReadyNAS systemd[1]: Mounting /data...
May 12 22:00:39 ReadyNAS systemd[1]: Reached target Swap.
May 12 22:00:39 ReadyNAS systemd[1]: Started udev Coldplug all Devices.
May 12 22:00:39 ReadyNAS systemd[1]: Starting udev Wait for Complete Device Initialization...
May 12 22:00:39 ReadyNAS systemd[1]: Found device /dev/ttyS0.
May 12 22:00:40 ReadyNAS systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
May 12 22:00:40 ReadyNAS systemd-udevd[1614]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:40 ReadyNAS systemd-udevd[1390]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:40 ReadyNAS disk_event_handler[1970]: disk event handler called for add
May 12 22:00:40 ReadyNAS disk_event_handler[1972]: disk event handler called for add
May 12 22:00:40 ReadyNAS systemd-udevd[1677]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:40 ReadyNAS disk_event_handler[1988]: disk event handler called for add
May 12 22:00:40 ReadyNAS disk_event_handler[1993]: disk event handler called for add
May 12 22:00:40 ReadyNAS disk_event_handler[1970]: Sending disk add event for /dev/sdb to readynasd.
May 12 22:00:40 ReadyNAS disk_event_handler[1972]: Sending disk add event for /dev/sdc to readynasd.
May 12 22:00:40 ReadyNAS systemd-udevd[1682]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:40 ReadyNAS disk_event_handler[2012]: disk event handler called for add
May 12 22:00:41 ReadyNAS disk_event_handler[2012]: Sending disk add event for /dev/sda to readynasd.
May 12 22:00:41 ReadyNAS disk_event_handler[1993]: Sending disk add event for /dev/sdd to readynasd.
May 12 22:00:41 ReadyNAS disk_event_handler[1988]: Sending disk add event for /dev/sde to readynasd.
May 12 22:00:41 ReadyNAS systemd-udevd[1587]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[1392]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[1391]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[1690]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[1614]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[1390]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[2091]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[2092]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[2093]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[2095]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[2096]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd-udevd[2094]: Process '/lib/udev/hdparm' failed with exit code 5.
May 12 22:00:41 ReadyNAS systemd[1]: Started udev Wait for Complete Device Initialization.
May 12 22:00:43 ReadyNAS systemd[1]: Mounted /data.
May 12 22:00:43 ReadyNAS systemd[1]: Reached target Local File Systems.
May 12 22:00:43 ReadyNAS systemd[1]: Starting Create Volatile Files and Directories...
May 12 22:00:43 ReadyNAS systemd[1]: Starting LSB: Add to mtab the entry for /dev....
May 12 22:00:43 ReadyNAS systemd[1]: Mounting Apps Directory...
May 12 22:00:43 ReadyNAS systemd[1]: Mounting Home Directory...
May 12 22:00:43 ReadyNAS systemd[1]: Mounted Apps Directory.
May 12 22:00:43 ReadyNAS systemd[1]: Mounted Home Directory.
May 12 22:00:43 ReadyNAS systemd[1]: Started LSB: Add to mtab the entry for /dev..
May 12 22:00:43 ReadyNAS systemd[1]: Started Create Volatile Files and Directories.
May 12 22:00:43 ReadyNAS systemd[1]: Starting Update UTMP about System Boot/Shutdown...
May 12 22:00:43 ReadyNAS systemd[1]: Started Update UTMP about System Boot/Shutdown.
May 12 22:00:43 ReadyNAS systemd[1]: Reached target System Initialization.
May 12 22:00:43 ReadyNAS systemd[1]: Started Timer for ReadyNAS Update Service.
May 12 22:00:43 ReadyNAS systemd[1]: Listening on D-Bus System Message Bus Socket.
May 12 22:00:43 ReadyNAS systemd[1]: Started Timer for Debian Package List Updater.
May 12 22:00:43 ReadyNAS systemd[1]: Started Daily Cleanup of Temporary Directories.
May 12 22:00:43 ReadyNAS systemd[1]: Started Anti-Virus Definition Update Timer.
May 12 22:00:43 ReadyNAS systemd[1]: Listening on RPCbind Server Activation Socket.
May 12 22:00:44 ReadyNAS systemd[1]: Started Log Truncate Timer.
May 12 22:00:44 ReadyNAS systemd[1]: Reached target Timers.
May 12 22:00:44 ReadyNAS systemd[1]: Listening on Avahi mDNS/DNS-SD Stack Activation Socket.
May 12 22:00:44 ReadyNAS systemd[1]: Reached target Sockets.
May 12 22:00:44 ReadyNAS systemd[1]: Reached target Basic System.
May 12 22:00:44 ReadyNAS systemd[1]: Started Regular background program processing daemon.
May 12 22:00:44 ReadyNAS cron[2179]: (CRON) INFO (pidfile fd = 3)
May 12 22:00:44 ReadyNAS systemd[1]: Started MD repair service.
May 12 22:00:44 ReadyNAS systemd[1]: Starting Login Service...
May 12 22:00:44 ReadyNAS cron[2179]: (CRON) INFO (Running @reboot jobs)
May 12 22:00:44 ReadyNAS systemd[1]: Starting Network bonding...
May 12 22:00:44 ReadyNAS systemd[1]: Started WSD/LLMNR Discovery/Name Service Daemon.
May 12 22:00:44 ReadyNAS wsdd2[2184]: starting.
May 12 22:00:44 ReadyNAS systemd-udevd[2197]: Could not generate persistent MAC address for bond0: No such file or directory
May 12 22:00:44 ReadyNAS kernel: bonding: bond0 is being created...
May 12 22:00:44 ReadyNAS systemd[1]: Starting LSB: Startup script for the Logitech Media Server...
May 12 22:00:44 ReadyNAS kernel: IPv6: ADDRCONF(NETDEV_UP): bond0: link is not ready
May 12 22:00:44 ReadyNAS kernel: 8021q: adding VLAN 0 to HW filter on device bond0
May 12 22:00:44 ReadyNAS kernel: bond0: Setting MII monitoring interval to 100
May 12 22:00:44 ReadyNAS kernel: bond0: Setting xmit hash policy to layer2+3 (2)
May 12 22:00:44 ReadyNAS kernel: al_eth 0000:00:03.0 eth1: al_eth_down
May 12 22:00:44 ReadyNAS kernel: [eth rx] warn: dma state didn't change to Disable
May 12 22:00:44 ReadyNAS kernel: [eth rx] warn: failed to change state, error -110
May 12 22:00:44 ReadyNAS systemd[1]: Started MD monitoring service.
May 12 22:00:44 ReadyNAS kernel: bond0: Adding slave eth1
May 12 22:00:44 ReadyNAS kernel: configured MAC to RGMII mode:
May 12 22:00:44 ReadyNAS kernel: al_eth 0000:00:03.0 eth1: using MSI-X per Queue interrupt mode
May 12 22:00:44 ReadyNAS kernel: libphy: al mdio bus: probed
May 12 22:00:44 ReadyNAS systemd[1]: Started D-Bus System Message Bus.
May 12 22:00:44 ReadyNAS logitechmediaserver[2226]: Making sure that Logitech Media Server is not running first: No process in pidfile '/var/run/logitechmediaserver.pid' found running; none killed.
May 12 22:00:44 ReadyNAS logitechmediaserver[2226]: Starting Logitech Media Server.
May 12 22:00:44 ReadyNAS mdadm[2233]: NewArray event detected on md device /dev/md0
May 12 22:00:44 ReadyNAS kernel: al_eth 0000:00:03.0 eth1: phy[5]: device 18:05, driver Atheros 8035 ethernet
May 12 22:00:44 ReadyNAS kernel: al_eth 0000:00:03.0 eth1: phy[5]:supported 2ef adv 2ef
May 12 22:00:44 ReadyNAS kernel: bond0: Enslaving eth1 as a backup interface with a down link
May 12 22:00:44 ReadyNAS kernel: al_eth 0000:00:01.0 eth0: al_eth_down
May 12 22:00:44 ReadyNAS kernel: [eth rx] warn: dma state didn't change to Disable
May 12 22:00:44 ReadyNAS kernel: [eth rx] warn: failed to change state, error -110
May 12 22:00:44 ReadyNAS kernel: bond0: Adding slave eth0
May 12 22:00:44 ReadyNAS kernel: configured MAC to RGMII mode:
May 12 22:00:44 ReadyNAS kernel: al_eth 0000:00:01.0 eth0: using MSI-X per Queue interrupt mode
May 12 22:00:44 ReadyNAS kernel: libphy: al mdio bus: probed
May 12 22:00:44 ReadyNAS mdadm[2233]: NewArray event detected on md device /dev/md1
May 12 22:00:44 ReadyNAS mdadm[2233]: NewArray event detected on md device /dev/md127
~ mdstat.log ~
"md127" can't be found in the file.
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid6 sda2[0] sdc2[3] sdb2[2] sdd2[1]
      1046528 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
md0 : active raid1 sda1[4] sdd1[3] sdc1[2] sdb1[5]
      4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
/dev/md/0:
  Version : 1.2
  Creation Time : Sun Apr 3 06:27:48 2016
  Raid Level : raid1
  Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
  Raid Devices : 4
  Total Devices : 4
  Persistence : Superblock is persistent
  Update Time : Sun May 14 12:33:50 2017
  State : clean
  Active Devices : 4
  Working Devices : 4
  Failed Devices : 0
  Spare Devices : 0
  Name : 119c1bce:0 (local to host 119c1bce)
  UUID : 2495045b:2aa5bd3c:8cff9bb5:de4e769c
  Events : 367
  Number Major Minor RaidDevice State
  4 8 1 0 active sync /dev/sda1
  5 8 17 1 active sync /dev/sdb1
  2 8 33 2 active sync /dev/sdc1
  3 8 49 3 active sync /dev/sdd1
/dev/md/1:
  Version : 1.2
  Creation Time : Sat Apr 30 23:31:42 2016
  Raid Level : raid6
  Array Size : 1046528 (1022.00 MiB 1071.64 MB)
  Used Dev Size : 523264 (511.00 MiB 535.82 MB)
  Raid Devices : 4
  Total Devices : 4
  Persistence : Superblock is persistent
  Update Time : Mon May 8 01:08:11 2017
  State : clean
  Active Devices : 4
  Working Devices : 4
  Failed Devices : 0
  Spare Devices : 0
  Layout : left-symmetric
  Chunk Size : 512K
  Name : 119c1bce:1 (local to host 119c1bce)
  UUID : 2863873a:78baa761:890ee932:2551648e
  Events : 19
  Number Major Minor RaidDevice State
  0 8 2 0 active sync /dev/sda2
  1 8 50 1 active sync /dev/sdd2
  2 8 18 2 active sync /dev/sdb2
  3 8 34 3 active sync /dev/sdc2
It seems that the RAID is OK, am I right?
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Somebody like @mdgm-ntgr will have to look at the whole log archive, because it doesn't make any sense to me.
You're not in a full-root situation, so the logs should be consistent, and they're not.
You don't find any mention of md127 in dmesg.log, yet the lines I was aiming for are in systemd-journal.log (you said volume.log, but I believe you meant systemd-journal.log).
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sda3 operational as raid disk 0
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sdd3 operational as raid disk 3
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sdc3 operational as raid disk 2
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: device sdb3 operational as raid disk 1
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: allocated 4274kB
May 12 22:00:38 ReadyNAS kernel: md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
May 12 22:00:38 ReadyNAS kernel: RAID conf printout:
May 12 22:00:38 ReadyNAS kernel: --- level:5 rd:4 wd:4
May 12 22:00:38 ReadyNAS kernel: disk 0, o:1, dev:sda3
May 12 22:00:38 ReadyNAS kernel: disk 1, o:1, dev:sdb3
May 12 22:00:38 ReadyNAS kernel: disk 2, o:1, dev:sdc3
May 12 22:00:38 ReadyNAS kernel: disk 3, o:1, dev:sdd3
May 12 22:00:38 ReadyNAS kernel: md127: detected capacity change from 0 to 11987456360448
May 12 22:00:38 ReadyNAS start_raids[1004]: mdadm: /dev/md/data-0 has been started with 4 drives.
May 12 22:00:39 ReadyNAS kernel: BTRFS: device label 119c1bce:data devid 1 transid 13920 /dev/md127
May 12 22:00:43 ReadyNAS systemd[1]: Mounted /data.
Yet you don't see md127 in mdstat.log. You really should see it there. "md127" and "data-0": that's your data volume. If systemd-journal.log shows that md127 was started, then it should be in mdstat.log.
Also, in the logs there is a lot of:
systemd-udevd[1614]: Process '/lib/udev/hdparm' failed with exit code 5.
But I'm not sure what that means.
Your RAID array seems OK. The issue looks like a software one to me. Or maybe an erroring HDD.
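To make the inconsistency obvious, the two files can be checked side by side. A sketch, assuming the log archive was extracted into a local readynas-logs directory (the path is an assumption):

```shell
# md127 should normally show up in BOTH files: the journal records
# the array starting, mdstat.log snapshots the running arrays.
# The readynas-logs path is an assumption.
for f in systemd-journal.log mdstat.log; do
  if grep -q 'md127' "readynas-logs/$f" 2>/dev/null; then
    echo "$f: md127 present"
  else
    echo "$f: md127 MISSING"
  fi
done
```

If the journal shows md127 starting but mdstat.log has no trace of it, the array was assembled at some point but is no longer running, which matches what this thread describes.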
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
you are right, I meant systemd-journal.log, not volume.log. My bad.
I've downloaded the logs a second time (as user "admin" on the web frontend) and I can't find "md127" or "data-0" in mdstat.log or in dmesg.log.
I don't think it's an erroring HDD, because everything was OK right before the update.
I will write mdgm and ask him if he can help me.
Thank you for your help.
Best regards
Patrice
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
This seems to be an increasing issue - see my thread here too: https://community.netgear.com/t5/New-to-ReadyNAS/ReadyNAS-516-just-upgraded-to-6-7-1-and-data-volume...
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
I mentioned your post in my first post.
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Please send in your logs (see the Sending Logs link in my sig).
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
I will do this right away. Thank you.
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
As I said here, https://community.netgear.com/t5/Using-your-ReadyNAS/Remove-inactive-volumes-to-use-the-disk-Disk-1-..., I doubt that you will see a lot of people here talking about their dead toaster. It's a storage forum, so yes, there are a lot of people talking about storage issues.
In recent threads, there is a bit of everything: full root, dead or erroring disks, broken RAID arrays, BTRFS corruption, etc. Any of these can lead to an "inactive volume" in red in the GUI. I help on a fair number of dead-volume threads (the ones where not all the logs are necessary, as I can't access the logs sent to mdgm), and I haven't seen any pattern so far.
While I believe that the system could and should be more detailed about what's causing a volume to be "inactive", https://community.netgear.com/t5/Idea-Exchange-for-ReadyNAS/Change-the-incredibly-confusing-error-me..., I don't see any proof, or concerning pattern, of an issue in a recent firmware leading to a particularly high volume-failure rate.
Of course, that's only my opinion.
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
I can only repeat myself: "I just installed the new upgrade 6.7.1 on my RN214 and got the confusing message that my volumes can't be loaded." So there must be an issue with the new firmware, and it's not only me.
While I believe that the system could and should be more detailed about what's causing a volume to be "inactive"
That's a good idea.
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Let's wait for mdgm's feedback.
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
What do you mean?
I don't want to argue with you, I am only curious.
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
I'm not arguing either.
There are multiple reasons for the volume to appear "inactive", some of the most common:
- Full root: the volume dedicated to the system is completely full, preventing it from booting properly. In your case, it could be related to the process of updating the F/W (any F/W), or to some add-on that filled the volume, with the consequence appearing after the reboot that is part of the update process.
- Corrupt system: often due to the previous condition, but it can also occasionally occur without a full root.
- Multiple disk failure: your drives could have been slowly dying while seemingly "working", or they suddenly died, or a mix. Updating a firmware implies writing to the drives, which can trigger failure(s).
- Erroring disk: a disk can be flapping and cause the NAS to be unable to mount the volume.
- BTRFS corruption: any operation on the volume, even a mount/unmount, can lead to filesystem corruption if an unexpected event occurs (at the FS or HW level). It happens.
Etc.
It's a shame that the NAS doesn't show in the GUI why the volume is inactive. It would be reasonably easy to integrate and really useful for anyone running into the issue, hence the post in the "idea" section.
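As a rough illustration of the first cause, a full-root check over SSH could look like this. A sketch only; the 90% threshold and the helper name check_root_usage are arbitrary assumptions for illustration, not a ReadyNAS limit or tool:

```shell
# Warn when the root filesystem is nearly full. The 90% threshold
# and the function name are assumptions made for this sketch.
check_root_usage() {
  # POSIX df output: line 2, field 5 is the use percentage, e.g. "14%".
  pct=$(df -P / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  if [ "$pct" -ge 90 ]; then
    echo "root is ${pct}% full - possible full-root condition"
  else
    echo "root is ${pct}% full - OK"
  fi
}
check_root_usage
```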
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
In a nutshell, loading the new OS version may have been the "last straw" in a series of events that caused the problem. So the new OS, in and of itself, may not be to blame. The new OS version itself may be a contributor if it does not tolerate a pre-existing problem as well as the previous version, or the simple process of doing an update (which uses space in the OS partition) may have been all it took.
There are also a couple of posts in the idea exchange about better handling OS partition problems (if that's what your issue is) that you may also want to up-vote.
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Not sure from the logs why it's not mounting. The logs show:
May 14 19:55:19 ReadyNAS start_raids[1041]: mdadm: device data-0 exists but is not an md array.
The disks appear to be healthy from looking at smart_history.log, but the RAID hasn't started.
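For what it's worth, that mdadm message ("device data-0 exists but is not an md array") generally suggests that something other than a running array is occupying the path mdadm wants to use. A hedged sketch of how one might classify whatever sits at that path; the helper name check_md_node is made up, and nothing should be deleted without support guidance:

```shell
# Classify whatever sits at an md device path.
# check_md_node is a made-up helper name for this sketch.
check_md_node() {
  node="$1"
  if [ -b "$node" ]; then
    echo "$node: block device (looks like a real array node)"
  elif [ -e "$node" ]; then
    echo "$node: exists but is NOT a block device (stale entry?)"
  else
    echo "$node: does not exist"
  fi
}
check_md_node /dev/md/data-0
```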
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Not sure why it hasn't started. Would need a remote look to get more of an idea.
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
just did it.
Best regards
Kalle
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Looks like you did some stuff via SSH e.g. a dist-upgrade using apt-get. Why did you do that?
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
yes, I did that several months ago. I totally forgot about it. I did it because I do that on my other Linux machines, simple as that. That was not the smartest move. After this dist-upgrade I had big problems with the NAS and had to do a firmware reset. Apparently there are also leftovers from that action. S h i t !
Best regards
Kalle
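For anyone else wondering whether apt was ever run by hand on their NAS, Debian keeps a record in /var/log/apt/history.log. A sketch; the helper name apt_commands is made up, and rotated .gz logs are not covered:

```shell
# List the command lines recorded in an apt history log.
# apt_commands is a made-up helper name for this sketch;
# the standard Debian log location is /var/log/apt/history.log.
apt_commands() {
  grep '^Commandline:' "$1" 2>/dev/null | sed 's/^Commandline: //'
}
apt_commands /var/log/apt/history.log
```

A line like "apt-get dist-upgrade" in the output would confirm a manual upgrade was done at some point.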
Re: After upgrading to 6.7.1 on RN214 no volumes can be loaded
Hi,
thank you for your precise explanation, jak0lantash. Just one day ago I didn't think there was the slightest possibility that any of your reasons could be the cause of my problem, but now it sounds like I **bleep**ed it up myself with a dist-upgrade. 😞
Best regards
Kalle