Failed XRAID expansion on Pro Pioneer
Hi,
I have a Pro Pioneer running OS 6.4.1RC1 which had 4x4TB in XRAID2 (RAID5), giving a volume of about 10.7TB.
I just added 2x6TB but only gained 2TB of space.
Given my understanding of XRAID layers, I was expecting a 6x4TB RAID5 layer (approx. 18TB of space) and a 2x2TB RAID1 layer (approx. 1.7TB of space), but that hasn't happened. I wasn't given any choice when I inserted the discs.
So: can I fix this without having to do a factory reset?
If this is dual redundancy (although the manual suggests you can only set dual redundancy in FlexRAID mode), can I just pull one of the 6TB discs, somehow force the array into single redundancy, and then re-add the extra 6TB disc (perhaps reformatting it outside the NAS)?
The ultimate aim is to vertically expand to 6x6TB over the next year or so, as needed.
I do have a backup, but it's on my Ultra4 with 8TB of JBOD drives, so it isn't redundant (as it was a backup).
The log says:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md126 : active raid1 sde4[0] sdf4[1]
      1953374912 blocks super 1.2 [2/2] [UU]
md127 : active raid5 sdd3[0] sdf3[5](S) sde3[4](S) sda3[3] sdb3[2] sdc3[1]
      11706500352 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid6 sda2[0] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
md0 : active raid1 sdd1[0] sde1[4] sdf1[5] sda1[3] sdb1[2] sdc1[1]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
If this helps.
Logs emailed.
Andy
Re: Failed XRAID expansion on Pro Pioneer
I have rebooted several times. I clicked Diagnostics in RAIDar and it came up with:
2015-11-24 17:44:56: journalctl[3802]: segfault at 7fa002583df0 ip 00007fa00b388f71 sp 00007ffc74616a50 error 4 in libsystemd-journal.so.0.0.3[7fa00b386000+11000]
2015-11-24 17:44:56: journalctl[3815]: segfault at 7fafdee8fdc8 ip 00007fafe447ef71 sp 00007ffecb4cd780 error 4 in libsystemd-journal.so.0.0.3[7fafe447c000+11000]
2015-11-24 17:44:54: journalctl[3702]: segfault at 7f56b5245ea8 ip 00007f56bbc5bf71 sp 00007fffba7727d0 error 4 in libsystemd-journal.so.0.0.3[7f56bbc59000+11000]
No idea if this is relevant.
Help!!!
Thanks
Andy
Re: Failed XRAID expansion on Pro Pioneer
Your logs indicate that your disks were migrated a few days ago from an ARM box to your Pro Pioneer.
Note the (S) next to two of the disks: it indicates that those disks are currently treated as spares. Manually forcing a reshape to grow the array across all the disks would resolve it. After the reshape has completed, the volume should then expand.
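As a minimal sketch of what to look for, the spare flag can be picked out of /proc/mdstat mechanically. The demo below runs on a sample line copied from the post above rather than on a live array; on the NAS itself you would inspect /proc/mdstat directly.

```shell
# Sample md127 line taken from the mdstat output posted earlier in this thread.
mdstat_line='md127 : active raid5 sdd3[0] sdf3[5](S) sde3[4](S) sda3[3] sdb3[2] sdc3[1]'

# Extract any member partitions flagged as spares, i.e. those with an "(S)" suffix.
spares=$(echo "$mdstat_line" | grep -o '[a-z0-9]*\[[0-9]*\](S)')
echo "$spares"

# On the NAS itself (over SSH) you would check the live file instead:
#   cat /proc/mdstat
```

Here the two 6TB disks' data partitions (sdf3 and sde3) show up as spares, which matches the missing capacity.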
Re: Failed XRAID expansion on Pro Pioneer
Hi mdgm,
Thank you. Skywalker has already remotely accessed the box and forced a reshape - I suspect he did the same.
I must say I forgot to mention the migration - I didn't even think about it, as it was successful. Could that be the cause of the problem??
I've been trying to interpret the mdstat information.
My interpretation is that md127 is the RAID5 array over all 6 disks, which didn't properly expand beyond 4. md126 must then be the RAID1 array over the 'extra' 2TB on the 2x6TB discs. So presumably when I add more capacity it is this array that would change into a RAID5 array and expand. If that didn't happen automatically, would the correct command be # mdadm --grow --raid-devices=3 /dev/md126 (assuming I swapped a 4TB for a 6TB), or do you need something extra to change RAID1 into RAID5?
What are md1 and md0??
Many thanks for your expert advice.
Kind regards,
Andy
Re: Failed XRAID expansion on Pro Pioneer
I don't think the migration would be the cause of that, though I guess anything is possible. Sometimes when disks are added they are marked as spares rather than added to the array. This is easy to fix.
The expansion should happen automatically, and there are some things to check before going off and forcing the volume to grow.
To force the reshape and to set the new RAID level you could do, e.g.:
# mdadm --grow --level=5 --raid-devices=3 /dev/md126
md0 is the OS array and md1 is for swap. Both md0 and md1 should use all the disks.
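Once a reshape is running, its progress can be watched in /proc/mdstat. The sketch below parses a sample progress line (the format the md driver prints, assumed here for illustration) rather than a live array; the commented commands are what you would actually run on the NAS.

```shell
# On the NAS itself (over SSH) you would monitor the reshape with:
#   cat /proc/mdstat            # shows a "reshape = NN.N%" line while it runs
#   mdadm --detail /dev/md126   # shows the reshape status and the new RAID level
#
# Demo: pull the percentage out of a sample progress line (sample values assumed).
sample='[==>..................]  reshape = 12.5% (244140625/1953374912) finish=180.0min'
pct=$(echo "$sample" | grep -o 'reshape = [0-9.]*%')
echo "$pct"
```

A reshape across multi-terabyte disks can take many hours, so checking this line periodically is the usual way to confirm it is actually progressing.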
Re: Failed XRAID expansion on Pro Pioneer
Thanks for clarifying.
What should be checked??
A
Re: Failed XRAID expansion on Pro Pioneer
That the new partitions are the right size and that the disks are healthy.
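One way to sketch those two checks: partition sizes can be read with parted, and disk health with smartctl. The demo below runs on a sample smartctl line (assumed output, not from this NAS); the commented commands are the real checks.

```shell
# On the NAS itself (over SSH), for each new disk, e.g. /dev/sde:
#   parted -s /dev/sde unit GB print   # confirm the data partition spans the disk
#   smartctl -a /dev/sde               # look at reallocated/pending sectors and overall health
#
# Demo: a quick pass/fail on a sample smartctl health line (sample assumed).
sample='SMART overall-health self-assessment test result: PASSED'
verdict=$(echo "$sample" | grep -q 'PASSED' && echo 'disk healthy')
echo "$verdict"
```

If either new disk reports pending or reallocated sectors, forcing a reshape onto it would be risky, which is presumably why these checks come first.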