
Skeetboy1
Apprentice

Having a dense moment here.....

I have an RN2120 (v6.9.3) with 4 x 1TB drives (RAID5) installed, and it is reaching capacity.

I have taken delivery of 4 x 4TB drives that have been previously used in a ReadyNAS.

The data on the 4TB drives is NOT required.

From the manual it appears that I cannot just remove one of the 1TB drives and replace it with a 4TB.

How do I progress? I want to replace all 4 x 1TBs with the 4 x 4TBs.

Thank you.

Model: ReadyNAS-OS6
Message 1 of 16


All Replies
StephenB
Guru

Re: Having a dense moment here.....


@Skeetboy1 wrote:

 

From the manual it appears that I cannot just remove one of the 1TB drives and replace it with a 4TB.

 


Where are you seeing that?

 

If you do a hot-swap without unformatting, you might see an inactive volume on the NAS volume screen.  Destroy that volume if it exists.  Either way, you then select the new disk from the center graphic on the volume tab, and then format it with the control on the right.  The NAS should then automatically add it to the array.

 

It might be wise to actually test these disks, since they are used.  Along the way, you can easily unformat them - which will eliminate any need to format them in the NAS.  You'd do this in a Windows PC (connecting them via either SATA or a USB adapter/dock).  Both Seatools (Seagate) and Lifeguard (WDC) have a destructive write-zeros test.

 

Whatever path you take, you should process one disk at a time, and wait for the resync to complete before doing the next.  There will be no expansion with the first replacement.  You should see the volume expand to 6 TB (5.45 TiB) after you finish with the second disk.  If you don't see that, then reboot the NAS at that point, and it should expand.  The two disks after that will expand the space by 3 TB each, so you will end up with 12 TB (10.9 TiB).
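As a sanity check on those capacity numbers, here is a small Python sketch (my own illustration, not a NETGEAR tool): single-redundancy X-RAID usable space works out to the sum of the disk sizes minus the largest disk.

```python
def xraid_capacity_tb(disk_sizes_tb):
    """Usable single-redundancy X-RAID capacity in TB:
    total space minus the largest disk (lost to redundancy)."""
    if len(disk_sizes_tb) < 2:
        return 0  # no redundancy possible with a single disk
    return sum(disk_sizes_tb) - max(disk_sizes_tb)

# Walking through the one-disk-at-a-time replacement steps:
print(xraid_capacity_tb([1, 1, 1, 1]))  # starting point: 3 TB
print(xraid_capacity_tb([4, 1, 1, 1]))  # after 1st swap: still 3 TB
print(xraid_capacity_tb([4, 4, 1, 1]))  # after 2nd swap: 6 TB
print(xraid_capacity_tb([4, 4, 4, 1]))  # after 3rd swap: 9 TB
print(xraid_capacity_tb([4, 4, 4, 4]))  # final: 12 TB
```

This matches the schedule above: no growth on the first swap, 6 TB after the second, then 3 TB more for each of the last two.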

 

Message 2 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

Thank you StephenB.

 

I happen to know where the 4TB drives came from, and they are good drives; they were replaced with 10TB drives in a 716X.

 

I am following your instructions in the first full paragraph, and so far they are spot on. A second data volume appeared; I destroyed it, and the disk is now formatting and re-syncing, with an estimated 8 hours to go.

 

I was struggling to find the right information in the manual under "Previously Formatted Disks". It tells you that you must re-format them, but does not tell you how to do this if you are expanding your system, just how to do it on a new system, or how to migrate the data.

 

The closest to what I'm doing is: "If you try to use previously formatted disks in a system that already contains usable disks, the system does not reformat or use the previously formatted disks. Any data on the previously formatted disks remains intact."

 

So you have very kindly provided the missing link!

Message 3 of 16
StephenB
Guru

Re: Having a dense moment here.....


@Skeetboy1 wrote:

 

The closest to what I'm doing is: "If you try to use previously formatted disks in a system that already contains usable disks, the system does not reformat or use the previously formatted disks. Any data on the previously formatted disks remains intact."

 

I agree that they documented it poorly and created needless confusion here.  The section below only applies to the case when all the disks are pre-formatted, but it (and some similar text in the hardware manuals) implies that you need to do a factory reset in order to add a disk to an existing RAID array.   That's unfortunate.

 

 

http://www.downloads.netgear.com/files/GDC/READYNAS-100/READYNAS_OS_6_SM_EN.pdf wrote:

If you want to use disks that were previously formatted for an operating system other than ReadyNAS OS 6 (for example, Windows, Linux, or previous-generation ReadyNAS), you must reformat the disks. You can reformat the disks by installing them, powering on the system, and performing a factory reset before continuing the configuration.

 

Message 4 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

OK, where we are at.

First disk changed; everything appears to have changed over OK. See image Capture1.

Message 5 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

Second disk change appears to have changed over OK, but the new space is not being seen by the OS, so I have rebooted as per your instruction. Although the system gets shut down automatically at 21:00 and restarted at 08:00 anyway.

 See image Capture2.

Message 6 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

System now looks like Capture3.

 

What do you advise?

Message 7 of 16
StephenB
Guru

Re: Having a dense moment here.....

Can you download the log zip file and post mdstat.log? (Copy/pasting it into the main post works best.)  The "insert code" (</>) control gives the best formatting.
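For anyone reading mdstat.log by hand, here is a small Python sketch (my own illustration, not part of ReadyNAS or its log tooling) that pulls each array's device count and [UUUU] status out of /proc/mdstat-style output, so a degraded array ([3/4] [UUU_]) stands out immediately:

```python
import re

def mdstat_health(mdstat_text):
    """Map each md array in /proc/mdstat-style text to its
    (active/total, UUUU) status, e.g. {'md127': ('4/4', 'UUUU')}."""
    health = {}
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r'^(md\d+)\s*:', line)
        if m:
            current = m.group(1)  # start of a new array stanza
            continue
        m = re.search(r'\[(\d+/\d+)\]\s*\[([U_]+)\]', line)
        if m and current:
            health[current] = (m.group(1), m.group(2))
            current = None
    return health

sample = """\
md127 : active raid5 sda3[4] sdd3[3] sdc3[2] sdb3[5]
      2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sdc1[2] sdb1[5] sda1[4] sdd1[3]
      4190208 blocks super 1.2 [4/4] [UUUU]
"""
print(mdstat_health(sample))
# {'md127': ('4/4', 'UUUU'), 'md0': ('4/4', 'UUUU')}
```

All members present and in sync ('4/4', 'UUUU') is what you want to see after each resync completes.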

Message 8 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

I have added the file in full, as the system keeps coming up with spurious errors when pasting it in.

Message 9 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md127 : active raid5 sda3[4] sdd3[3] sdc3[2] sdb3[5]
      2915732352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

md1 : active raid10 sdc2[0] sdb2[3] sda2[2] sdd2[1]
      1046528 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
      
md0 : active raid1 sdc1[2] sdb1[5] sda1[4] sdd1[3]
      4190208 blocks super 1.2 [4/4] [UUUU]
      
unused devices: <none>
/dev/md/0:
        Version : 1.2
  Creation Time : Tue Oct  1 13:39:03 2013
     Raid Level : raid1
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sun Aug  5 11:58:00 2018
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           Name : 0e34eaa8:0  (local to host 0e34eaa8)
           UUID : 46ed3050:bd74f808:884ba5d6:c86a0cfb
         Events : 509

    Number   Major   Minor   RaidDevice State
       2       8       33        0      active sync   /dev/sdc1
       3       8       49        1      active sync   /dev/sdd1
       4       8        1        2      active sync   /dev/sda1
       5       8       17        3      active sync   /dev/sdb1
/dev/md/1:
        Version : 1.2
  Creation Time : Sat Aug  4 15:24:35 2018
     Raid Level : raid10
     Array Size : 1046528 (1022.00 MiB 1071.64 MB)
  Used Dev Size : 523264 (511.00 MiB 535.82 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Sat Aug  4 21:01:04 2018
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : 0e34eaa8:1  (local to host 0e34eaa8)
           UUID : da478a0c:b7b753e3:e90b032d:e1a461d9
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync set-A   /dev/sdc2
       1       8       50        1      active sync set-B   /dev/sdd2
       2       8        2        2      active sync set-A   /dev/sda2
       3       8       18        3      active sync set-B   /dev/sdb2
/dev/md/data-0:
        Version : 1.2
  Creation Time : Tue Oct  1 13:39:03 2013
     Raid Level : raid5
     Array Size : 2915732352 (2780.66 GiB 2985.71 GB)
  Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Aug  5 09:34:50 2018
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 0e34eaa8:data-0  (local to host 0e34eaa8)
           UUID : 9a8d97bf:8eb9325a:eb8ef348:c02d0503
         Events : 9884

    Number   Major   Minor   RaidDevice State
       4       8        3        0      active sync   /dev/sda3
       5       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
Message 10 of 16
mdgm-ntgr
NETGEAR Employee Retired

Re: Having a dense moment here.....

Are you using X-RAID? You should see a green line under X-RAID on the Volumes tab if so. Don’t change this setting now.

Can you send your logs in (see the Sending Logs link in my sig)?
Message 12 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

No, there is no green line underneath.

I will email the full set of logs. @StephenB only asked for the one above.

Message 13 of 16
StephenB
Guru

Re: Having a dense moment here.....


@Skeetboy1 wrote:

I will email the full set of logs. @StephenB only asked for the one above.


I'm not a mod, so I don't receive emailed logs. mdstat.log is generally a good place to look first, and it's usually short enough to cut and paste.

 

After a normal vertical expansion, the system will create a new raid layer (which should have been md126 in your case).  mdstat doesn't show that layer at all.  That's likely good news, as I think it's harder to deal with a failed expansion than with one that never actually started.
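To spot (or rule out) that extra layer in a pasted mdstat.log, a quick hypothetical helper (plain Python, not ReadyNAS tooling) can list the arrays present:

```python
import re

def list_arrays(mdstat_text):
    """Return the md array names found in /proc/mdstat-style text.
    After a successful vertical expansion, an extra data array
    (e.g. md126) should appear alongside md127."""
    return re.findall(r'^(md\d+)\s*:', mdstat_text, flags=re.M)

# The arrays actually shown in the mdstat.log pasted above:
sample = (
    "md127 : active raid5 sda3[4] sdd3[3] sdc3[2] sdb3[5]\n"
    "md1 : active raid10 sdc2[0] sdb2[3] sda2[2] sdd2[1]\n"
    "md0 : active raid1 sdc1[2] sdb1[5] sda1[4] sdd1[3]\n"
)
arrays = list_arrays(sample)
print('md126' in arrays)  # False: the expansion layer never appeared
```

Only md127 (data), md1 (swap), and md0 (OS) are present, consistent with an expansion that never started rather than one that failed partway.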

 

But wait for @mdgm-ntgr's response before you take any action.

 

 

Message 14 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

Thank you @StephenB.

 

Message 15 of 16
Skeetboy1
Apprentice

Re: Having a dense moment here.....

Logs have been sent, just checking that they have arrived successfully.

Message 16 of 16