
cerjzc
Tutor

Pro 6 won't boot up after going from 6.7.1 to 6.7.3

I upgraded my Pro 2 from 6.7.1 to the 6.7.3 beta without any issues, but when I tried my Pro 6 a bit later it appears to have not gone well. It showed "Upgrading FW" on the display for a long time, and the web page still appeared to show 6.7.1 before it finally timed out and stopped letting me in. After about 30 minutes I tried booting it again, but it just goes to retrying startup at around 77 or 83 percent and sits there for a long period. I tried an OS reinstall from the boot menu; it also said "Updating FW", but it appears to be back in the same position. I have had issues in the last few months after some upgrades, plus some power issues where it didn't shut down cleanly, so I'm not sure if it is due for a factory reset or if something is low on space. I can boot it up in Maint mode if there is someone who can maybe take a quick look; if not, I will probably try a factory reset and restore the data/config.

 

Thanks,

Jeff

Model: ReadyNAS RNDP6310|ReadyNAS Pro 6
Message 1 of 13


All Replies
cerjzc
Tutor

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

Just as an update to this: I did bring it up in debug mode and it is displaying "no ip address" on the display, but in RAIDar 6.3 it does show up and indicates it is in Tech Support mode. It also shows it is on 6.7.3-T283 and lists 192.168.168.168 for both IPs. Not sure if that is normal or not.

 

Thanks

Message 2 of 13
cerjzc
Tutor

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

After a bit more digging via SSH, I think I might be having the same problem as the one noted here:

 

https://community.netgear.com/t5/Using-your-ReadyNAS/RN314-FW-6-7-1-root-partition-full-how-to-fix/t...

 

Filesystem    1K-blocks       Used  Available Use% Mounted on
udev              10240          4      10236   1% /dev
/dev/md0        4190208    1144380          0 100% /
tmpfs           2018872          8    2018864   1% /dev/shm
tmpfs           2018872      16964    2001908   1% /run
tmpfs           1009436          8    1009428   1% /run/lock
tmpfs           2018872          0    2018872   0% /sys/fs/cgroup
/dev/md126  11696688832 5317807976 6377199192  46% /data
/dev/md126  11696688832 5317807976 6377199192  46% /apps
/dev/md126  11696688832 5317807976 6377199192  46% /home


root@CalvCoNAS:/mnt/var/cores# btrfs fi df /
Data, single: total=3.57GiB, used=580.03MiB
System, DUP: total=8.00MiB, used=16.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=204.56MiB, used=12.75MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=512.00MiB, used=320.25MiB

root@CalvCoNAS:/mnt/var/cores# btrfs fi usage /
Overall:
Device size: 4.00GiB
Device allocated: 4.00GiB
Device unallocated: 0.00B
Device missing: 0.00B
Used: 605.56MiB
Free (estimated): 3.00GiB (min: 3.00GiB)
Data ratio: 1.00
Metadata ratio: 1.96
Global reserve: 512.00MiB (used: 320.25MiB)

Data,single: Size:3.57GiB, Used:580.03MiB
/dev/md0 3.57GiB

Metadata,single: Size:8.00MiB, Used:0.00B
/dev/md0 8.00MiB

Metadata,DUP: Size:204.56MiB, Used:12.75MiB
/dev/md0 409.12MiB

System,single: Size:4.00MiB, Used:0.00B
/dev/md0 4.00MiB

System,DUP: Size:8.00MiB, Used:16.00KiB
/dev/md0 16.00MiB

Unallocated:
/dev/md0 0.00B

 

I found there was a /var/cores/core-smdb file, but I'm unable to move or remove anything; I just get the message "No space left on device". The other post mentions that Skywalker was able to fix the root filesystem, but it doesn't say what was done.

 

I did also try to download the logs, but that fails too, most likely because of the same space issue.
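
 

From what I've read elsewhere, the usual suggestion for a btrfs volume that reports "No space left on device" while there is still plenty of free data space is a filtered balance, which releases empty or nearly empty data chunks so metadata can be allocated again. I haven't tried it here and don't know whether it would work with the global reserve already this heavily used, so treat it as a guess on my part rather than the actual fix:

# btrfs balance start -dusage=0 /
# btrfs balance start -dusage=5 /

and then another "btrfs fi usage /" to see whether any space becomes unallocated.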

 

Thanks,

Jeff

Message 3 of 13
mdgm-ntgr
NETGEAR Employee Retired

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

I see what Skywalker did. I've replied to your PM.

Message 4 of 13
mdgm-ntgr
NETGEAR Employee Retired

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

Yes, you had the same problem. It should now be fixed.

 

Edit: 

If you have updated to 6.7.3 and run into this issue, please try USB Boot Recovery with ReadyNAS OS 6.7.4, which is now available! (Note for those using RAIDiator-x86 systems such as the Pro 6: you'll need to use the RAIDiator-x86 4.2.x USB Boot Recovery tool with the OS6 firmware renamed to RAIDiator-x86-something.)

 

If you have not yet upgraded, please upgrade to 6.7.4 rather than 6.7.3. If your system has already been fixed, I would still suggest updating to 6.7.4 the normal way, using the web admin GUI.

Message 5 of 13
buddy81
Aspirant

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

I have the same problem... I need help. What can I do?

Message 6 of 13
goi
Guide

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

Coming from 6.6.1 to 6.7.3, but same problem. What can I do?

Message 7 of 13
mdgm-ntgr
NETGEAR Employee Retired

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

buddy81, I've sent you a PM. goi, I've replied in your thread and sent you a PM.

Message 8 of 13
goi
Guide

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

MANY THANKS TO MDGM! You saved my day.

Message 9 of 13
goi
Guide

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

To avoid running into this space limit again on future upgrades, I decided to migrate the root (md0) volume to RAID6 (which requires at least 4 working drives). Since I'll never run this array with fewer than 6 drives (or, with 2 failed, at least 4), I should be safe.

 

Now I have

/dev/md0:
        Version : 1.2
  Creation Time : Mon Apr 18 22:58:10 2016
     Raid Level : raid6
     Array Size : 16760832 (15.98 GiB 17.16 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu May 25 17:20:32 2017
          State : clean 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 540ed549:0  (local to host 540ed549)
           UUID : 3201e354:b9fcd222:f4d2fe4d:888dc6f6
         Events : 19025

    Number   Major   Minor   RaidDevice State
      11       8        1        0      active sync   /dev/sda1
      10       8       17        1      active sync   /dev/sdb1
       5       8       33        2      active sync   /dev/sdc1
       4       8       49        3      active sync   /dev/sdd1
       3       8       81        4      active sync   /dev/sdf1
       6       8       97        5      active sync   /dev/sdg1

with 

Filesystem      Size  Used Avail Use% Mounted on
udev             10M  4.0K   10M   1% /dev
/dev/md0         16G  1.2G   15G   8% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  2.6M  3.9G   1% /run
tmpfs           2.0G  1.1M  2.0G   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/md127       19T  5.2T   14T  28% /data
Message 10 of 13
TeknoJnky
Hero

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

I'd be interested in doing that. Do you mind posting a short tutorial or the commands?

 

Message 11 of 13
goi
Guide

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

Sure:

First of all:

 

DISCLAIMER: The commands given here are for tutorial purposes only. I give neither an implied nor an express warranty that they will work as intended.

MOREOVER, A WARNING: They might destroy the integrity of your data, so you are obliged to make a backup first!

 

First, determine which drives your md0 device is made of:

 

 

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Apr 18 22:58:10 2016
     Raid Level : raid1
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu May 25 18:00:01 2017
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

(...)

           Name : 540ed549:0  (local to host 540ed549)
           UUID : 3201e354:b9fcd222:f4d2fe4d:888dc6f6
         Events : 19025

    Number   Major   Minor   RaidDevice State
      11       8        1        0      active sync   /dev/sda1
      10       8       17        1      active sync   /dev/sdb1
       5       8       33        2      active sync   /dev/sdc1
       4       8       49        3      active sync   /dev/sdd1
       3       8       81        4      active sync   /dev/sde1
       6       8       97        5      active sync   /dev/sdf1

 

Make sure the array is clean and that there is no heavy I/O on it (a quick check is shown below). Warning: if your drives use a different enumeration (e.g. sda, sdb, sdc, sdd, sdf, sdg), you need to adapt the commands to the changed drive letters.
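
 

A quick generic check (my addition here, not strictly required): look at /proc/mdstat and make sure none of the md devices is currently resyncing or reshaping before you start:

 

# cat /proc/mdstat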

 

 

Now mark the 4 drives you don't need yet as "failed":

 

# mdadm /dev/md0 --manage --fail /dev/sdf1
# mdadm /dev/md0 --manage --fail /dev/sde1
# mdadm /dev/md0 --manage --fail /dev/sdd1
# mdadm /dev/md0 --manage --fail /dev/sdc1

 

You can now safely tell mdadm that those 4 drives are no longer part of the array and shrink it down to 2 devices:

 

# mdadm --grow --backup-file=/data/md0backup/tmpfile --raid-devices=2 /dev/md0

 

Now we remove and re-add 3 of the drives so they come back as "spares", and remove the last drive only, because we migrate from RAID1 to RAID5 first and to RAID6 afterwards.

 

# mdadm /dev/md0 --manage --remove /dev/sdc1 --add /dev/sdc1
# mdadm /dev/md0 --manage --remove /dev/sdd1 --add /dev/sdd1
# mdadm /dev/md0 --manage --remove /dev/sde1 --add /dev/sde1
# mdadm /dev/md0 --manage --remove /dev/sdf1

 

Now we are ready to rumble. Pass a backup file to this operation in case anything goes wrong during the very first part of the migration.

We are now migrating the two-disk RAID1 into a two-disk RAID5 (yes, mdadm can do this!):

 

# mdadm --grow --backup-file=/data/md0backup/tmpfile --level=5 /dev/md0

 

and wait until the RAID has resynced and the state shows "clean" (a way to monitor the progress is shown just below):
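
 

If you don't want to poll by hand, something along these lines should work (this is my addition, not part of the original write-up; mdadm --wait simply blocks until any resync/reshape on the device has finished, and watch, if installed, just refreshes /proc/mdstat):

 

# mdadm --wait /dev/md0

or

# watch -n 10 cat /proc/mdstat

 

The same applies to the later grow steps as well.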

 

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Apr 18 22:58:10 2016
     Raid Level : raid5
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 2
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu May 25 18:00:01 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 540ed549:0  (local to host 540ed549)
           UUID : 3201e354:b9fcd222:f4d2fe4d:888dc6f6
         Events : 19025

    Number   Major   Minor   RaidDevice State
      11       8        1        0      active sync   /dev/sda1
      10       8       17        1      active sync   /dev/sdb1

 

You'll get a warning that some data had to be stored in the given backup file!

Now we extend our RAID5 with the 3 additional spares:

 

# mdadm --grow --backup-file=/data/md0backup/tmpfile --raid-devices=5 /dev/md0

 

Please wait until the synchronization finishes:

 

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Apr 18 22:58:10 2016
     Raid Level : raid5
     Array Size : 16760832 (15.98 GiB 17.16 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Thu May 25 18:00:01 2017
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 540ed549:0  (local to host 540ed549)
           UUID : 3201e354:b9fcd222:f4d2fe4d:888dc6f6
         Events : 19025

    Number   Major   Minor   RaidDevice State
      11       8        1        0      active sync   /dev/sda1
      10       8       17        1      active sync   /dev/sdb1
       5       8       33        2      active sync   /dev/sdc1
       4       8       49        3      active sync   /dev/sdd1
       3       8       81        4      active sync   /dev/sde1

 

Now we add the last spare to it and make it RAID6:

 

# mdadm /dev/md0 --manage --add /dev/sdf1
# mdadm --grow --backup-file=/data/md0backup/tmpfile --level=6 /dev/md0

Side note: growing the RAID to level 6 will automatically adjust the raid-devices parameter to 6.

 

Again, please wait until the RAID is synchronized and the new layout is cleanly available:

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Apr 18 22:58:10 2016
     Raid Level : raid6
     Array Size : 16760832 (15.98 GiB 17.16 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Thu May 25 18:00:01 2017
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 540ed549:0  (local to host 540ed549)
           UUID : 3201e354:b9fcd222:f4d2fe4d:888dc6f6
         Events : 19025

    Number   Major   Minor   RaidDevice State
      11       8        1        0      active sync   /dev/sda1
      10       8       17        1      active sync   /dev/sdb1
       5       8       33        2      active sync   /dev/sdc1
       4       8       49        3      active sync   /dev/sdd1
       3       8       81        4      active sync   /dev/sde1
       6       8       97        5      active sync   /dev/sdf1

 

The last step is to tell the filesystem that it now has a different size:

 

# btrfs filesystem resize max /

 

Check that it was applied:

 

# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             10M  4.0K   10M   1% /dev
/dev/md0         16G  1.2G   15G   8% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  2.6M  3.9G   1% /run
tmpfs           2.0G  1.1M  2.0G   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/md127       19T  5.2T   14T  28% /data
/dev/md127       19T  5.2T   14T  28% /home
/dev/md127       19T  5.2T   14T  28% /apps

 

You are now set!
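
 

One optional sanity check before calling it done (again my own addition, not part of the original steps): double-check the new array definition and the btrfs view of the space:

 

# mdadm --detail --scan

# btrfs fi usage /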

Message 12 of 13
mdgm-ntgr
NETGEAR Employee Retired

Re: Pro 6 won't boot up after going from 6.7.1 to 6.7.3

If you have updated to 6.7.3 and run into this issue, please try USB Boot Recovery with ReadyNAS OS 6.7.4, which is now available! (Note for those using RAIDiator-x86 systems such as the Pro 6: you'll need to use the RAIDiator-x86 4.2.x USB Boot Recovery tool with the OS6 firmware renamed to RAIDiator-x86-something.)

 

If you have not yet upgraded, please upgrade to 6.7.4 rather than 6.7.3. If your system has already been fixed, I would still suggest updating to 6.7.4 the normal way, using the web admin GUI.

Message 13 of 13