Forum Discussion
fastfwd · Virtuoso
Apr 14, 2016
Pro 6 running OS4 - Convert RAID5 to RAID6
My Pro 6 held 5 drives (2TB, 3TB, and 3x4TB) in a single-redundant array (3 layers: 8TB, 3TB, and 2TB). I added a 6th drive (4TB) and chose to use it for dual-redundancy rather than increased capacity.
The reshape process started with no problem, but it was interrupted twice. The first interruption was a power outage, which seemed to have no effect (the UPS did its job and the NAS powered down gracefully, then the NAS resumed reshaping when the power was restored). The second interruption was the NAS's regularly-scheduled RAID scrub.
The RAID scrub completed without errors but it seems to have left my array in a half-reshaped state. Specifically, my 8TB layer is now RAID6, but the 3TB and 2TB layers are each RAID5 with a spare (the intermediate configuration that a RAID5 array goes through on its way to becoming a RAID6 array).
The NAS seems to think it's done; I've rebooted three times and none of those reboots triggered a continuation of the reshape process.
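For what it's worth, here's how I've been checking whether md still has a reshape queued; this assumes the OS4 kernel exposes the usual md sysfs files, so treat it as a sketch rather than gospel:
NAS1:~# cat /sys/block/md3/md/sync_action    # "idle" = md isn't running (or resuming) a reshape on this layer
NAS1:~# cat /sys/block/md4/md/sync_action
NAS1:~# dmesg | grep -i reshape              # any leftover reshape messages in the kernel log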
So here are my questions:
1. Is this the expected outcome for the process, or should all three layers have been converted to RAID6?
2. The two RAID5-with-spare arrays are /dev/md3 and /dev/md4. Is it safe for me to force them to convert to RAID6 like this:
mdadm --grow /dev/md3 --level=6 --raid-devices=5 --backup-file=/root/md3backup
mdadm --grow /dev/md4 --level=6 --raid-devices=4 --backup-file=/root/md4backup
3. Is there a better way to get them to convert to RAID6?
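Before running those, my plan is a quick sanity check that the device counts add up (each layer's active count plus its spare should equal the --raid-devices target):
NAS1:~# mdadm --detail /dev/md3 | grep -E 'Raid Devices|Spare Devices'    # expect 4 + 1, matching --raid-devices=5
NAS1:~# mdadm --detail /dev/md4 | grep -E 'Raid Devices|Spare Devices'    # expect 3 + 1, matching --raid-devices=4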
For reference, here's the current state of things; if more information would be helpful, please don't hesitate to ask for it:
NAS1:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md4 : active raid5 sda5[2] sdf5[4](S) sdd5[3] sde5[1]
1953501568 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md3 : active raid5 sdc4[0] sdf4[6](S) sda4[4] sde4[2] sdd4[5]
2930252352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md2 : active raid6 sda3[5] sdf3[7] sde3[4] sdd3[6] sdc3[2] sdb3[1]
7795170816 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid6 sda2[5] sdf2[8] sde2[4] sdd2[6] sdc2[7] sdb2[1]
2096896 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md0 : active raid1 sda1[5] sdf1[8] sde1[4] sdd1[6] sdc1[7] sdb1[1]
4193268 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>
NAS1:~# mdadm --detail /dev/md4
/dev/md4:
Version : 1.2
Creation Time : Mon Oct 21 17:55:01 2013
Raid Level : raid5
Array Size : 1953501568 (1863.00 GiB 2000.39 GB)
Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Apr 13 22:54:11 2016
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Name : 001F33EA3F13:4
UUID : 9451c9b3:418fc220:2a516cc4:20253e0c
Events : 2664
Number Major Minor RaidDevice State
2 8 5 0 active sync /dev/sda5
1 8 69 1 active sync /dev/sde5
3 8 53 2 active sync /dev/sdd5
4 8 85 - spare /dev/sdf5
NAS1:~# mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Tue Oct 8 01:29:22 2013
Raid Level : raid5
Array Size : 2930252352 (2794.51 GiB 3000.58 GB)
Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
Raid Devices : 4
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Wed Apr 13 22:55:02 2016
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Name : 001F33EA3F13:3
UUID : 8deb24cd:96196f51:530cb8c6:ec74048e
Events : 3595
Number Major Minor RaidDevice State
0 8 36 0 active sync /dev/sdc4
5 8 52 1 active sync /dev/sdd4
2 8 68 2 active sync /dev/sde4
4 8 4 3 active sync /dev/sda4
6 8 84 - spare /dev/sdf4
NAS1:~# mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Mon Oct 7 20:41:39 2013
Raid Level : raid6
Array Size : 7795170816 (7434.05 GiB 7982.25 GB)
Used Dev Size : 1948792704 (1858.51 GiB 1995.56 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Wed Apr 13 23:09:59 2016
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 001F33EA3F13:2
UUID : bd10ce1e:cc674523:3b3fcf56:adffffbb
Events : 3386316
Number Major Minor RaidDevice State
5 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
6 8 51 3 active sync /dev/sdd3
4 8 67 4 active sync /dev/sde3
7 8 83 5 active sync /dev/sdf3
NAS1:~# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Oct 7 20:41:38 2013
Raid Level : raid6
Array Size : 2096896 (2048.09 MiB 2147.22 MB)
Used Dev Size : 524224 (512.02 MiB 536.81 MB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Wed Apr 13 09:27:27 2016
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 001F33EA3F13:1
UUID : cb522e6f:ebb66434:7b5e2b6b:dd11c38b
Events : 138
Number Major Minor RaidDevice State
5 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
7 8 34 2 active sync /dev/sdc2
6 8 50 3 active sync /dev/sdd2
4 8 66 4 active sync /dev/sde2
8 8 82 5 active sync /dev/sdf2
NAS1:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Oct 7 20:41:38 2013
Raid Level : raid1
Array Size : 4193268 (4.00 GiB 4.29 GB)
Used Dev Size : 4193268 (4.00 GiB 4.29 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Wed Apr 13 23:11:56 2016
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Name : 001F33EA3F13:0
UUID : d11ee59a:62313c9e:8d6141be:a70fbe64
Events : 663
Number Major Minor RaidDevice State
5 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
7 8 33 2 active sync /dev/sdc1
6 8 49 3 active sync /dev/sdd1
4 8 65 4 active sync /dev/sde1
8 8 81 5 active sync /dev/sdf1
8 Replies
- mdgm-ntgr · NETGEAR Employee Retired
1. Generally you'd do the conversion to RAID-6 with a single-layer array.
2. That should work. First check that the partitions are all the same size for the devices in each layer, and check the SMART stats of the disks (example commands below). Is your backup up to date? If not, you may also wish to update that.
3. No. This is mdadm, and those are the commands to use. One note: you shouldn't try shrinking a layer, as that would lead to data loss.
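If it helps, the pre-checks could look something like this (smartctl ships with smartmontools; the device names here just mirror what you posted, so adjust as needed):
NAS1:~# cat /proc/partitions | grep sd.4                  # md3 layer: partition sizes should all match
NAS1:~# cat /proc/partitions | grep sd.5                  # md4 layer: likewise
NAS1:~# for d in /dev/sd[a-f]; do smartctl -H $d; done    # overall SMART health verdict for each disk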
- fastfwd · Virtuoso
mdgm wrote:
> Generally you'd do the conversion to RAID-6 with a single-layer array.
Yeah. But unfortunately I already had 3 layers, and I figured it'd be safest to upgrade to dual-redundancy first, before putting this large array through the stress of two expansions.
mdgm wrote:
> That should work. First check that the partitions are all the same size for the devices in each layer, and check the SMART stats of the disks. Is your backup up to date? If not, you may also wish to update that.
Thanks! The disks are healthy and my backup is current. The relevant partitions are equal-sized:
NAS1:~# cat /proc/partitions | grep sd.5
8     5   976751999 sda5
8    53   976751999 sdd5
8    69   976751999 sde5
8    85   976751999 sdf5
NAS1:~# cat /proc/partitions | grep sd.4
8     4   976751979 sda4
8    36   976751979 sdc4
8    52   976751979 sdd4
8    68   976751979 sde4
8    84   976751979 sdf4
I'm starting the process now. I'll post again when it's done.
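In case it's useful to anyone following along, I'll keep an eye on the reshape with something like this (assuming watch is present on the box; mdadm --wait simply blocks until any resync/reshape on the given device finishes):
NAS1:~# watch -n 60 cat /proc/mdstat                      # progress shows up as a percentage under each md device
NAS1:~# mdadm --wait /dev/md3 && mdadm --wait /dev/md4    # returns once both reshapes have completed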
- fastfwd · Virtuoso
Oh, and I just noticed an error in my original post: the --raid-devices parameters were wrong. The commands should have been:
mdadm --grow /dev/md3 --level=6 --raid-devices=5 --backup-file=/root/md3backup
mdadm --grow /dev/md4 --level=6 --raid-devices=4 --backup-file=/root/md4backup
- fastfwd · Virtuoso
Thanks again, mdgm. The NAS just finished restriping the final layer. Now the whole array is RAID6 and I feel a lot more confident that it'll survive the upcoming expansions from 11TB to 13TB and then to 16TB.
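For anyone wanting to confirm the same thing on their own box, the quick check is just the status commands from earlier; both layers should now report level 6 with no spares:
NAS1:~# grep raid6 /proc/mdstat                        # md2, md3 and md4 should all appear here now
NAS1:~# mdadm --detail /dev/md3 | grep 'Raid Level'    # should now say raid6
NAS1:~# mdadm --detail /dev/md4 | grep 'Raid Level'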
- fastfwd · Virtuoso
fastfwd wrote:
> ... from 11TB to 13TB and then to 16TB.
I mean, from 13TB to 14TB and then to 16TB.