Mauser69
Sep 13, 2020 · Tutor
X-RAID Volume Smaller than expected
This is not a big deal, but it puzzles me, so I thought I would ask if anyone has an explanation. My RN214 was configured with 2x8 TB + 2x1.5 TB drives, and it reported a total volume size of 9.9...
Mauser69
Sep 13, 2020 · Tutor
StephenB wrote:
But you can't upgrade the 1.5 TB drives to 3 TB drives with XRAID. You can replace drives, or upgrade drives to the largest size in the array (or larger). But (apart from some rare exceptions) you can't upgrade a smaller drive to an intermediate size. That was why the space didn't change.
The reason the upgrade to 3 TB didn't work is that the system would have had to repartition the 6.5 TB partitions on the 8 TB drives. XRAID won't do that.
If you want all the extra space, you'll need to re-create the volume and restore the files from backup.
Actually, the drive upgrade from 1.5 TB to 3 TB drives DID work just fine, and the total volume space DID increase, just not by quite as much as I expected (I added 3 TB total, but the volume size only increased by 1.4 TB).
I suspected that if I started over fresh with the 4 current drives, maybe the system would find the missing space (but I did not know why - it was just a guess). However, since the missing 1.3 TB of space is quite small in comparison to the entire volume of 11.3 TB, it is really not worth the work to try to start over again. But I'll remember that possibility if I try to vertically expand the volume again in the future.
StephenB
Sep 13, 2020 · Guru - Experienced User
Mauser69 wrote:
Actually, the drive upgrade from 1.5 TB to 3 TB drives DID work just fine, and the total volume space DID increase, just not by quite as much as I expected (I added 3 TB total, but the volume size only increased by 1.4 TB).
We did see something similar a few weeks ago - possibly something has changed a bit in XRAID.
I think if you look in mdstat.log, you will find that you now have three raid groups for the data volume.
- 4 x 1.5 TB RAID-5 across all disks
- 2 x 6.5 TB RAID-1 across the two 8 TB disks
- 2 x 1.5 TB RAID-1 across the two 3 TB disks.
Can you post mdstat.log?
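If that's what's there, the volume size is just the sum of the three groups. As a rough sketch of the arithmetic (a sketch only, assuming the three-group layout above; sizes in decimal TB, while the volume size shown in the admin UI is in binary TiB):

# Rough capacity sketch in Python, assuming the three-group layout above.
# Sizes are decimal TB; the admin UI reports binary TiB, so the displayed
# figure is about 9% smaller than the decimal sum.
layers = [
    ("4 x 1.5 TB RAID-5", 4, 1.5),   # RAID-5 usable = (n - 1) * size
    ("2 x 6.5 TB RAID-1", 2, 6.5),   # RAID-1 usable = size
    ("2 x 1.5 TB RAID-1", 2, 1.5),
]
total = 0.0
for name, n, size in layers:
    usable = (n - 1) * size          # holds for RAID-5 and 2-disk RAID-1
    total += usable
    print(f"{name}: {usable:.1f} TB usable")
print(f"total: {total:.1f} TB = ~{total * 1e12 / 2**40:.2f} TiB")

That sums to 12.5 TB, which is the ~11.3 TiB volume the UI shows.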
Mauser69
Sep 17, 2020 · Tutor
StephenB wrote:
Mauser69 wrote:
Actually, the drive upgrade from 1.5 TB to 3 TB drives DID work just fine, and the total volume space DID increase, just not by quite as much as I expected (I added 3 TB total, but the volume size only increased by 1.4 TB).
We did see something similar a few weeks ago - possibly something has changed a bit in XRAID.
I think if you look in mdstat.log, you will find that you now have three raid groups for the data volume.
- 4 x 1.5 TB RAID-5 across all disks
- 2 x 6.5 TB RAID-1 across the two 8 TB disks
- 2 x 1.5 TB RAID-1 across the two 3 TB disks.
Can you post mdstat.log?
I have not forgotten about your request - I had never saved the logs and looked at the detail before, so I am just now trying to sort through them. I found mdstat.log, and although I do not understand a lot of what I am seeing, I do think I see the two RAID1 groups that you suspected. This is very different from what the User's Manual would have us believe! Trying to figure out how to post this log now...
Looks like I cannot attach it as a file, so I'll try to copy/paste the full text?
Just for full background info, this X-RAID volume originally started as 2x3 TB,
later added 2x1.5 TB (just a couple of old drives I had sitting around)
later changed out the 2x3 drives for 2x8 TB,
and just recently changed out the 2x1.5 TB for 2x3 TB (because one of the 1.5 TB drives was throwing errors).
With what I think I understand today, I would never have bought the two 8 TB drives - I think I would have gotten much more bang for the buck with 4x3 TB (especially since I already had two of the 3 TB drives sitting on the shelf). I have about 5 TB of data on the NAS that is just a copy of other external drives, so none of that needs the NAS redundancy at all. It seems to me I would be far better served to attach one of the 8 TB drives as a USB share to hold this redundant data and reserve the X-RAID volume for what I really need to protect. The only negative I can think of to that solution is the extra power consumed by another drive enclosure, added to the power needed by the four smaller drives in the NAS.
I really do appreciate all the help you guys have tried to give me in understanding how this X-RAID volume works. It may SEEM easier for the user (in the documentation) than traditional RAID configurations, but I think I prefer the certainty of doing my own volume configurations instead of just letting the NAS do whatever the hell it wants!
Here is the mdstat.log:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid1 sda4[2] sdb4[3]
6348755904 blocks super 1.2 [2/2] [UU]
md126 : active raid5 sdc3[6] sdb3[5] sda3[4] sdd3[7]
4380866496 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md127 : active raid1 sdc4[0] sdd4[1]
1464995904 blocks super 1.2 [2/2] [UU]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sdc1[6] sdb1[5] sda1[4] sdd1[7]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Wed Jun 12 12:17:09 2019
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Sep 17 19:43:56 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 40564318:0 (local to host 40564318)
UUID : da2bd3ff:5fdd9904:b419a5c6:12900948
Events : 9334
Number Major Minor RaidDevice State
6 8 33 0 active sync /dev/sdc1
7 8 49 1 active sync /dev/sdd1
4 8 1 2 active sync /dev/sda1
5 8 17 3 active sync /dev/sdb1
/dev/md/1:
Version : 1.2
Creation Time : Sat Sep 12 05:56:59 2020
Raid Level : raid10
Array Size : 1044480 (1020.00 MiB 1069.55 MB)
Used Dev Size : 522240 (510.00 MiB 534.77 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Sep 17 08:03:00 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : unknown
Name : 40564318:1 (local to host 40564318)
UUID : d5780a51:e2f4553a:db7c25fc:d8ead871
Events : 19
Number Major Minor RaidDevice State
0 8 2 0 active sync set-A /dev/sda2
1 8 18 1 active sync set-B /dev/sdb2
2 8 34 2 active sync set-A /dev/sdc2
3 8 50 3 active sync set-B /dev/sdd2
/dev/md/data-0:
Version : 1.2
Creation Time : Wed Jun 12 12:17:40 2019
Raid Level : raid5
Array Size : 4380866496 (4177.92 GiB 4486.01 GB)
Used Dev Size : 1460288832 (1392.64 GiB 1495.34 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Sep 17 19:08:06 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : unknown
Name : 40564318:data-0 (local to host 40564318)
UUID : 21cf3146:5288155f:bb06831a:a11ca83c
Events : 9657
Number Major Minor RaidDevice State
6 8 35 0 active sync /dev/sdc3
7 8 51 1 active sync /dev/sdd3
4 8 3 2 active sync /dev/sda3
5 8 19 3 active sync /dev/sdb3
/dev/md/data-1:
Version : 1.2
Creation Time : Tue Jun 18 16:30:41 2019
Raid Level : raid1
Array Size : 6348755904 (6054.65 GiB 6501.13 GB)
Used Dev Size : 6348755904 (6054.65 GiB 6501.13 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Sep 17 19:08:06 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 40564318:data-1 (local to host 40564318)
UUID : a18f5563:46a549cd:b3dda2ea:6b283414
Events : 426
Number Major Minor RaidDevice State
2 8 4 0 active sync /dev/sda4
3 8 20 1 active sync /dev/sdb4
/dev/md/data-2:
Version : 1.2
Creation Time : Sat Sep 12 12:03:57 2020
Raid Level : raid1
Array Size : 1464995904 (1397.13 GiB 1500.16 GB)
Used Dev Size : 1464995904 (1397.13 GiB 1500.16 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Thu Sep 17 19:08:06 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 40564318:data-2 (local to host 40564318)
UUID : 83ab22b0:0b37ffbb:ebe83e70:63914a3f
Events : 61
Number Major Minor RaidDevice State
0 8 36 0 active sync /dev/sdc4
1 8 52 1 active sync /dev/sdd4
StephenB
Sep 18, 2020 · Guru - Experienced User
Mauser69 wrote:
I have not forgotten about your request - I had never saved the logs and looked at the detail before, so just now trying to sort through them. I found the mdstat.log, and although I do not understand a lot of what I am seeing, I do think I see the two RAID1 groups that you suspected. This is very different than what the User's Manual would have us believe! Trying to figure out how to post this log now...
The best way to post it is to copy/paste it into a forum post (as you did). Though the </> "insert code" tool in the forum toolbar is a bit neater.
StephenB wrote:
I think if you look in mdstat.log, you will find that you now have three raid groups for the data volume.
- 4 x 1.5 TB RAID-5 across all disks
- 2 x 6.5 TB RAID-1 across the two 8 TB disks
- 2 x 1.5 TB RAID-1 across the two 3 TB disks.
That's exactly what we are seeing - reorganizing some of the mdstat info to make that clearer:
md126 : active raid5 sdc3[6] sdb3[5] sda3[4] sdd3[7]
4380866496 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
/dev/md/data-0: Raid Level : raid5 Array Size : 4380866496 (4177.92 GiB 4486.01 GB)

md125 : active raid1 sda4[2] sdb4[3]
6348755904 blocks super 1.2 [2/2] [UU]
/dev/md/data-1: Raid Level : raid1 Array Size : 6348755904 (6054.65 GiB 6501.13 GB)

md127 : active raid1 sdc4[0] sdd4[1]
1464995904 blocks super 1.2 [2/2] [UU]
/dev/md/data-2: Raid Level : raid1 Array Size : 1464995904 (1397.13 GiB 1500.16 GB)

It's not clear when XRAID started doing this (and it does make it much harder to tell folks what capacity they will end up with).
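Side note in case anyone wants to check the numbers: the mdadm block counts are 1 KiB each, which is where those GiB/GB figures come from. For example, for data-1:

blocks = 6348755904               # from md125 / data-1 above
size_bytes = blocks * 1024        # mdadm blocks are 1 KiB each
print(size_bytes / 2**30, "GiB")  # ~6054.65 GiB
print(size_bytes / 1e9, "GB")     # ~6501.13 GB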
Right now, it is giving you as much space as it can w/o repartitioning the disks - which is nice. But if you were to upgrade to 4x8TB later on, you'd end up with a lot less space than with the old rules - about 17.5 TB instead of 24 TB. You'd need to do a factory reset to fix this.
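To spell that estimate out - a sketch only, assuming XRAID keeps the existing partition boundaries rather than rebuilding them:

# Sketch of the hypothetical 4x8TB upgrade (decimal TB), assuming the
# existing partition boundaries on the two 8 TB disks are kept.
kept = (4 - 1) * 1.5 + 6.5   # 4x1.5TB RAID-5 layer + 2x6.5TB RAID-1 layer
grown = 8.0 - 1.5            # data-2 RAID-1 grows from 1.5 TB to 6.5 TB
print(kept + grown)          # 17.5 TB under the new rules

# Old rules: repartition into a single RAID-5 across all four 8 TB disks.
print((4 - 1) * 8.0)         # 24.0 TB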
Based on posts here, I think most folks do end up with equal size disks over time. So I think the old rules were actually better. Marc_V, JohnCM_S: Can you clarify this with development - whether the behavior is intentional or a bug - and perhaps pass along my feedback on why I think the original rules were better?
Mauser69 wrote:
With what I think I understand today, I would never have bought the two 8 TB drives - I think I would have gotten much more bang for the buck with 4x3 TB (especially since I already had two of the 3 TB drives sitting on the shelf). I have about 5 TB of data on the NAS that is just a copy of other external drives, so none of that needs the NAS redundancy at all. It seems to me I would be far better served to attach one of the 8 TB drives as a USB share to hold this redundant data and reserve the X-RAID volume for what I really need to protect. The only negative I can think of to that solution is the extra power consumed by another drive enclosure, added to the power needed by the four smaller drives in the NAS.
That could well be cheaper, though personally I've stopped using USB drives. I haven't found them to be as reliable as internal ones. One aspect is that USB drives have shifted to SMR technology (since that gives a bit more capacity). That's ok for archival material though.
I do want to stress that you still do need backups for the internal RAID array - RAID redundancy is convenient, but it is not enough to protect your data.
Mauser69 wrote:
I really do appreciate all the help you guys have tried to give me in understanding how this X-RAID volume works. It may SEEM easier for the user (in the documentation) than traditional RAID configurations, but I think I prefer the certainty of doing my own volume configurations instead of just letting the NAS do whatever the hell it wants!
I am glad we could help.
I do think XRAID is the way to go for most users, as understanding RAID groups is just too complicated for most people.
I agree about the uncertainty aspect of the new rules. Most users also lose track of their expansion history, and that affects the outcome more than it did in the past.
It would be good to add a targeted XRAID calculator to the admin UI that would tell you precisely what you would end up with on your own system before you actually do an upgrade (and also tell you if you'd get more space with a factory reset).
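For what it's worth, the classic layering rules fit in a few lines of Python. This is a rough model only - a hypothetical helper for a fresh volume, in decimal TB, ignoring the OS partitions and the keep-existing-partitions behavior this thread turned up:

def xraid_capacity(disks):
    # Slice the sorted disks into layers; each layer spans every disk
    # at least that big. RAID-1 on a 2-disk layer and RAID-5 on a 3+
    # disk layer both yield (disks_in_layer - 1) usable shares.
    sizes = sorted(disks)
    total, used = 0.0, 0.0
    for i, s in enumerate(sizes):
        remaining = len(sizes) - i   # disks at least this big
        layer = s - used             # thickness of this slice
        if layer > 0 and remaining >= 2:
            total += layer * (remaining - 1)
        used = s
    return total

print(xraid_capacity([8, 8, 3, 3]))  # 14.0 TB fresh, vs the 12.5 TB here
print(xraid_capacity([8, 8, 8, 8]))  # 24.0 TB

Something like that, wired into the admin UI with the system's actual partition map, would take the guesswork out of expansions.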