Forum Discussion
chsu83
Mar 29, 2017Luminary
ReadyNAS Volume not expanding - with encryption
I don't know the exact Pro 6 model..
I've got a Pro6 with
2x 2TB and 2x 3TB
Everything works fine except that X-RAID2 isn't working.. at least not as expected.
I've installed ReadyNAS OS6 with factory reset.
Deleted the volume and recreated it.
Because the default Data X-RAID volume isn't encrypted, I selected all four disks, created a RAID5 volume with encryption, and activated X-RAID.
Tested it on VM Appliance (without encryption) and it worked.
First it created a "normal RAID5", and after that it took the free space on the 3TB disks and created a RAID1 on top.. at least that's roughly what it looked like, and it's also what I expected.
But now on the Pro 6 it does not expand beyond 4x 2TB in RAID5.
Is there something to nudge the expansion process? Or does it simply not work with an encrypted volume? (All my other X-RAID tests worked fine and I got somewhere around 6.3 TB.)
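As a sanity check, that ~6.3 TB figure is roughly what X-RAID should yield for this disk mix. A rough back-of-the-envelope calculation using the raw vendor byte counts (ignoring partition and filesystem overhead):

```shell
# Rough expected X-RAID capacity for 2x 2TB + 2x 3TB disks
# (raw vendor byte counts; ignores partition/filesystem overhead)
two_tb=2000398934016
three_tb=3000592982016
gib=1073741824

raid5=$(( 3 * two_tb ))          # RAID5 over the common 2TB slice of all four disks
raid1=$(( three_tb - two_tb ))   # RAID1 over the extra ~1TB on the two 3TB disks

echo "$(( (raid5 + raid1) / gib )) GiB usable (roughly)"
```

That comes out around 6520 GiB, i.e. ~6.3-6.4 TiB, matching what the unencrypted tests produced.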
md1 is the swap volume, so no need to touch that.
If the volume is encrypted, X-RAID can't create sub-RAID arrays on the unused capacity of mixed-capacity disks. It's just not compatible.
This is why the logs say "Skipping X-RAID auto-expansion on encrypted pool data".
Your options: use disks of the same capacity, create separate encrypted volumes, or don't use volume encryption.
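For reference, on an unencrypted pool the expansion you saw in the VM test boils down to roughly the following steps. This is a hand sketch of how md and btrfs are layered on OS6, not the actual ReadyNAS internals; the partition names (sdc4/sdd4), the sub-array name, and the mount point /data are assumptions, and the commands need root on real hardware:

```shell
# Sketch of what X-RAID auto-expansion amounts to on an UNENCRYPTED pool
# with mixed 2TB/3TB disks. Illustration only -- device names, the
# data-1 array name, and the /data mount point are assumptions.

# 1. Partition the unused tail of each 3TB disk as a new Linux RAID
#    partition (assumed here to become /dev/sdc4 and /dev/sdd4).

# 2. Build a RAID1 sub-array from the two new partitions:
mdadm --create /dev/md/data-1 --level=1 --raid-devices=2 /dev/sdc4 /dev/sdd4

# 3. Add the new array to the existing btrfs filesystem and rebalance:
btrfs device add /dev/md/data-1 /data
btrfs balance start /data
```

With encryption, the btrfs filesystem sits on a LUKS layer instead of directly on the md array, which is presumably why the firmware refuses to attempt this automatically.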
8 Replies
- chsu83Luminary
fdisk -l /dev/sda
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4AB6C267-7EF1-410C-9DB1-746EFAC1141F

Device       Start        End    Sectors  Size Type
/dev/sda1       64    8388671    8388608    4G Linux RAID
/dev/sda2  8388672    9437247    1048576  512M Linux RAID
/dev/sda3  9437248 3907029119 3897591872  1.8T Linux RAID

fdisk -l /dev/sdb
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 14C08FED-6110-48ED-9CB6-E891D3A1E6CC

Device       Start        End    Sectors  Size Type
/dev/sdb1       64    8388671    8388608    4G Linux RAID
/dev/sdb2  8388672    9437247    1048576  512M Linux RAID
/dev/sdb3  9437248 3907029119 3897591872  1.8T Linux RAID

fdisk -l /dev/sdc
Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 467D1AFA-EEC9-4A6A-BED6-7312DFA34629

Device       Start        End    Sectors  Size Type
/dev/sdc1       64    8388671    8388608    4G Linux RAID
/dev/sdc2  8388672    9437247    1048576  512M Linux RAID
/dev/sdc3  9437248 3907029119 3897591872  1.8T Linux RAID

fdisk -l /dev/sdd
Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0A183F38-CFFF-486A-AA02-81EA4A0D7EC4

Device       Start        End    Sectors  Size Type
/dev/sdd1       64    8388671    8388608    4G Linux RAID
/dev/sdd2  8388672    9437247    1048576  512M Linux RAID
/dev/sdd3  9437248 3907029119 3897591872  1.8T Linux RAID
- chsu83Luminary
Puh.. it seems like there's something left over from the RAID6 test :S
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      5845994496 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  0.5% (10276820/1948664832) finish=291.4min speed=110830K/sec

md1 : active raid6 sda2[0] sdc2[3] sdd2[2] sdb2[1]
      1046528 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sda1[0] sdc1[2] sdd1[3] sdb1[1]
      4190208 blocks super 1.2 [4/4] [UUUU]

unused devices: <none>

/dev/md/0:
        Version : 1.2
  Creation Time : Tue Mar 28 23:30:15 2017
     Raid Level : raid1
     Array Size : 4190208 (4.00 GiB 4.29 GB)
  Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Wed Mar 29 21:33:52 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
           Name : 33eb049b:0  (local to host 33eb049b)
           UUID : 73f582d4:eb212081:661a269e:0369cfed
         Events : 238

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       3       8       49        2      active sync   /dev/sdd1
       2       8       33        3      active sync   /dev/sdc1

/dev/md/1:
        Version : 1.2
  Creation Time : Wed Mar 29 00:07:57 2017
     Raid Level : raid6
     Array Size : 1046528 (1022.00 MiB 1071.64 MB)
  Used Dev Size : 523264 (511.00 MiB 535.82 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Wed Mar 29 00:29:12 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : 33eb049b:1  (local to host 33eb049b)
           UUID : 8af43a31:0aabdd5a:7cbd7294:fc716ca5
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       50        2      active sync   /dev/sdd2
       3       8       34        3      active sync   /dev/sdc2

/dev/md/data-0:
        Version : 1.2
  Creation Time : Wed Mar 29 07:48:01 2017
     Raid Level : raid5
     Array Size : 5845994496 (5575.17 GiB 5986.30 GB)
  Used Dev Size : 1948664832 (1858.39 GiB 1995.43 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Wed Mar 29 21:32:46 2017
          State : clean, resyncing
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
  Resync Status : 0% complete
           Name : 33eb049b:data-0  (local to host 33eb049b)
           UUID : b8e3be8c:5c3f556b:90d6189c:4a27892f
         Events : 134

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
- chsu83Luminary
Have done:
mdadm --grow /dev/md/1 --level=raid5
mdadm --grow /dev/md/1 --raid-devices=4
rebooted
but the log still shows:
Skipping X-RAID auto-expansion on encrypted pool data
- chsu83Luminary
Seems like expanding doesn't work.. at least for mixed-capacity disks.
But according to this it should work:
https://community.netgear.com/t5/Using-your-ReadyNAS/Setting-up-encryption/td-p/841686
OK, this one seems to be a definitive statement:
https://community.netgear.com/t5/Using-your-ReadyNAS/X-RAID-expansion-on-Pro4-with-OS6/td-p/904490
Is there any newer information on this?