

btaroli
Prodigy

Drive Replace/Upgrade/Expansion in Mixed-size Scenario

Well, it's time to replace a couple of drives. I've got a couple with increasing pending/uncorrectable counts. Not high, but slowly creeping up. I've got 3x4TB and 5x8TB drives now. For the price, I'm considering 10TB or 12TB as replacements... and I'd eventually get to at least 5 of the new size running in the system.

 

The question, really, is what is the best path to swapping/upgrading them to avoid potential pitfalls for the auto-expansion. Right now, I have two md volumes (one built from 4TB slices on all 8 drives, and one from 4TB slices on the 5 larger drives).

 

md126 : active raid6 sde4[0] sdd4[5] sdh4[3] sdg4[2] sdf4[1]
      11720636736 blocks super 1.2 level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md127 : active raid6 sda3[0] sdh3[7] sdg3[6] sdf3[8] sde3[9] sdd3[10] sdc3[2] sdb3[1]
      23413000704 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
      
...

/dev/md/data-0:
           Version : 1.2
     Creation Time : Fri Dec 16 21:54:54 2016
        Raid Level : raid6
        Array Size : 23413000704 (22328.38 GiB 23974.91 GB)
     Used Dev Size : 3902166784 (3721.40 GiB 3995.82 GB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

       Update Time : Sat May 25 17:53:24 2019
             State : active 
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : unknown

              Name : 2fe75b42:data-0  (local to host 2fe75b42)
              UUID : 8067ab48:ffd7afa3:9eff813b:bef9f987
            Events : 132324

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
      10       8       51        3      active sync   /dev/sdd3
       9       8       67        4      active sync   /dev/sde3
       8       8       83        5      active sync   /dev/sdf3
       6       8       99        6      active sync   /dev/sdg3
       7       8      115        7      active sync   /dev/sdh3
/dev/md/data-1:
           Version : 1.2
     Creation Time : Fri Dec 23 10:45:21 2016
        Raid Level : raid6
        Array Size : 11720636736 (11177.67 GiB 12001.93 GB)
     Used Dev Size : 3906878912 (3725.89 GiB 4000.64 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Sat May 25 17:53:24 2019
             State : active 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 64K

Consistency Policy : unknown

              Name : 2fe75b42:data-1  (local to host 2fe75b42)
              UUID : 7752d45e:1a192675:86cb19c8:a92065eb
            Events : 119212

    Number   Major   Minor   RaidDevice State
       0       8       68        0      active sync   /dev/sde4
       1       8       84        1      active sync   /dev/sdf4
       2       8      100        2      active sync   /dev/sdg4
       3       8      116        3      active sync   /dev/sdh4
       5       8       52        4      active sync   /dev/sdd4

One troublesome drive is 4TB (sdb) and the other is 8TB (sdf). My thought is to begin by replacing one of each of these, in turn, and then swap the remaining 4TB disks.

 

At first I'd replace the 8TB (sdf) and the 4TB (sdb), to ensure we don't have any questionable drives left. The 4TB's replacement would, during resync (or right after), be extended so that it becomes a sixth drive in the second md data volume. That should be straightforward.

 

But replacing the two remaining 4TB disks will reach the critical mass of four 10 (or 12) TB disks needed to create a new md volume/stripe covering the extra 2 or 4TB of capacity. So... what happens at this point? Theoretically I wind up with one md volume sized by the 8TB drives (now the smallest) and another covering the extra 2 or 4TB on each of the larger drives.

 

What I'm concerned about is whether I need to follow a prescribed series of replacements of drives of certain capacities FIRST in order to avoid potential expansion issues. For example, is it better to replace all the smallest drives FIRST and then replace the 8TB (sdf)? Or does it just not matter and it'll figure it out without any headaches?

 

Is there a VM like we had a long time ago in order to test such scenarios, or is the behavior of the expansion logic such that it's fairly predictable at this point?

 

Model: RN628X|ReadyNAS 628X - Ultimate Performance Business Data Storage - 8-Bay
Message 1 of 4

Accepted Solutions
StephenB
Guru

Re: Drive Replace/Upgrade/Expansion in Mixed-size Scenario


@btaroli wrote:

 

What I'm concerned about is whether I need to follow a prescribed series of replacements of drives of certain capacities FIRST in order to avoid potential expansion issues. For example, is it better to replace all the smallest drives FIRST and then replace the 8TB (sdf)? Or does it just not matter and it'll figure it out without any headaches?

 


The order shouldn't matter, since you are installing drives that are larger than anything in the array now.

 

Of course you will get more space up front if you replace the smallest drives first.  Each 4 TB drive you replace will give you 4 TB more volume now (with more expansion after you get to 4 of the new drives).

 


@btaroli wrote:

 

But replacing the two remaining 4TB disks will reach the critical mass of four 10 (or 12) TB disks needed to create a new md volume/stripe covering the extra 2 or 4TB of capacity. So... what happens at this point? Theoretically I wind up with one md volume sized by the 8TB drives (now the smallest) and another covering the extra 2 or 4TB on each of the larger drives.

 


At that point you'd have either 4x10TB+4x8TB or 4x12TB+4x8TB correct?

 

If so, then if you go with 10TB you'd have a 52TB (40+32-20) volume, if you go with 12 TB you'd have 56 TB (48+32-24).

 

Either way, you'd have 3 RAID groups:  data-0 (8x4TB RAID-6), data-1 (expanded to 8x4TB RAID-6) and a new data-2 (either 4x2 TB RAID-6 or 4x4 TB RAID-6).  XRAID doesn't destroy RAID groups, it just layers new ones on top of the existing ones.
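Those capacity figures can be sanity-checked with a quick script. This is just a sketch of the arithmetic; the only assumption is the standard RAID-6 rule that a group of n equal-size slices yields (n - 2) x slice-size of usable space:

```python
# Sanity-check of the layered XRAID capacity math above.
# Assumption: each RAID-6 group yields (n_drives - 2) * slice_size usable TB.

def raid6_usable(n_drives, slice_tb):
    """Usable TB of one RAID-6 group built from equal-size slices."""
    return (n_drives - 2) * slice_tb

def xraid_volume(groups):
    """Total usable TB across layered RAID groups, given (drives, slice_tb) pairs."""
    return sum(raid6_usable(n, s) for n, s in groups)

# 4x10TB + 4x8TB: data-0 = 8x4TB, data-1 = 8x4TB, data-2 = 4x2TB
with_10tb = xraid_volume([(8, 4), (8, 4), (4, 2)])
# 4x12TB + 4x8TB: same, but the data-2 slice grows to 4TB
with_12tb = xraid_volume([(8, 4), (8, 4), (4, 4)])
print(with_10tb, with_12tb)  # 52 56
```

Which matches the 52 TB (40+32-20) and 56 TB (48+32-24) totals above.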

 


Message 2 of 4

All Replies
btaroli
Prodigy

Re: Drive Replace/Upgrade/Expansion in Mixed-size Scenario


@StephenB wrote:

Either way, you'd have 3 RAID groups:  data-0 (8x4TB RAID-6), data-1 (expanded to 8x4TB RAID-6) and a new data-2 (either 4x2 TB RAID-6 or 4x4 TB RAID-6).  XRAID doesn't destroy RAID groups, it just layers new ones on top of the existing ones.

 


Not sure why I thought it'd collapse them... hmm. OK. Well, in that case I think I'll hold off until I'm making a bigger jump. Sticking with 8s I can still add 12 extra TB without triggering the creation of a new md volume. Waiting for the bigger jump is my usual approach anyway. Back when I first did 8TB drives, they were $400-500 apiece! heh
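A quick check of that 12 TB figure, again assuming RAID-6 yields (n - 2) x slice-size usable space per group:

```python
# data-1 today: 5 members with a 4TB slice each, RAID-6.
# Swapping the three 4TB drives for 8TB-or-larger grows data-1 to 8 members.
def raid6_usable(n_drives, slice_tb):
    return (n_drives - 2) * slice_tb

before = raid6_usable(5, 4)  # 12 TB usable today
after = raid6_usable(8, 4)   # 24 TB usable after expansion
print(after - before)  # 12
```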

Model: RN628X|ReadyNAS 628X - Ultimate Performance Business Data Storage - 8-Bay
Message 3 of 4
StephenB
Guru

Re: Drive Replace/Upgrade/Expansion in Mixed-size Scenario


@btaroli wrote:
I think I'll hold off until I'm making a bigger jump. Sticking with 8s I can still add 12 extra TB without triggering the creation of a new md volume. Waiting for the bigger jump is my usual approach anyway.

That generally is more cost effective when you are expanding vertically.

 

Though when a 6 TB drive failed a few weeks ago, I decided to replace it with a 10 TB one.  

Message 4 of 4