Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Hi
I have a Pro6 on 6.9.3 that had 6x6TB in XRAID2 (single redundancy). I've sequentially added 3x10TB drives, and today added the 4th. It appears the disk had been used before (something I'll take up with Amazon), so I destroyed the partition and then restarted.
This time the volume screen lists 3 different RAID groups (it never has before) and the XRAID indicator on the right is grey, not green. I haven't consciously chosen to turn off XRAID, as I want to swap out the remaining 2 drives in the future.
Can someone tell me what's going on?!
Thanks Andy
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1569792 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md125 : active raid5 sdd4[7] sde4[0] sdc4[6] sdb4[9] sda4[8] sdf4[1]
      9766874560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [UUUUU_]
      resync=DELAYED
md126 : active raid5 sdd3[9] sdc3[8] sde3[4] sdf3[5] sda3[6] sdb3[7]
      19510833920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
      [>....................] recovery = 0.9% (37969280/3902166784) finish=506.6min speed=127112K/sec
md127 : active raid5 sda5[0] sdb5[2] sdc5[1]
      7811566336 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md0 : active raid1 sdd1[9] sdc1[8] sde1[4](W) sdf1[5](W) sda1[6] sdb1[7]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>

/dev/md/0:
    Version : 1.2
    Creation Time : Thu Feb 13 19:50:28 2014
    Raid Level : raid1
    Array Size : 4190208 (4.00 GiB 4.29 GB)
    Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
    Raid Devices : 6   Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Thu Jul 26 16:28:09 2018
    State : clean
    Active Devices : 6   Working Devices : 6   Failed Devices : 0   Spare Devices : 0
    Name : 33ea2557:0 (local to host 33ea2557)
    UUID : a425c11d:4b9480fd:8fa2e4ff:906c719b
    Events : 1964648

    Number Major Minor RaidDevice State
    9      8     49    0          active sync   /dev/sdd1
    8      8     33    1          active sync   /dev/sdc1
    7      8     17    2          active sync   /dev/sdb1
    6      8     1     3          active sync   /dev/sda1
    5      8     81    4          active sync writemostly   /dev/sdf1
    4      8     65    5          active sync writemostly   /dev/sde1

/dev/md/data-0:
    Version : 1.2
    Creation Time : Thu Feb 13 19:50:29 2014
    Raid Level : raid5
    Array Size : 19510833920 (18606.98 GiB 19979.09 GB)
    Used Dev Size : 3902166784 (3721.40 GiB 3995.82 GB)
    Raid Devices : 6   Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Thu Jul 26 16:27:45 2018
    State : clean, degraded, recovering
    Active Devices : 5   Working Devices : 6   Failed Devices : 0   Spare Devices : 1
    Layout : left-symmetric
    Chunk Size : 64K
    Rebuild Status : 0% complete
    Name : 33ea2557:data-0 (local to host 33ea2557)
    UUID : 92b2bc5f:cf375e4b:5dcb468f:89d2bd81
    Events : 13331348

    Number Major Minor RaidDevice State
    9      8     51    0          spare rebuilding   /dev/sdd3
    8      8     35    1          active sync   /dev/sdc3
    7      8     19    2          active sync   /dev/sdb3
    6      8     3     3          active sync   /dev/sda3
    5      8     83    4          active sync   /dev/sdf3
    4      8     67    5          active sync   /dev/sde3

/dev/md/data-1:
    Version : 1.2
    Creation Time : Tue Nov 24 12:23:49 2015
    Raid Level : raid5
    Array Size : 9766874560 (9314.42 GiB 10001.28 GB)
    Used Dev Size : 1953374912 (1862.88 GiB 2000.26 GB)
    Raid Devices : 6   Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Thu Jul 26 16:22:46 2018
    State : clean, degraded, resyncing (DELAYED)
    Active Devices : 5   Working Devices : 6   Failed Devices : 0   Spare Devices : 1
    Layout : left-symmetric
    Chunk Size : 64K
    Name : 33ea2557:data-1 (local to host 33ea2557)
    UUID : 4010d079:27f07b46:33b86d75:a3989442
    Events : 15430

    Number Major Minor RaidDevice State
    0      8     68    0          active sync   /dev/sde4
    1      8     84    1          active sync   /dev/sdf4
    8      8     4     2          active sync   /dev/sda4
    9      8     20    3          active sync   /dev/sdb4
    6      8     36    4          active sync   /dev/sdc4
    7      8     52    5          spare rebuilding   /dev/sdd4

/dev/md/data-2:
    Version : 1.2
    Creation Time : Sun Jul 22 21:41:37 2018
    Raid Level : raid5
    Array Size : 7811566336 (7449.69 GiB 7999.04 GB)
    Used Dev Size : 3905783168 (3724.85 GiB 3999.52 GB)
    Raid Devices : 3   Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Thu Jul 26 16:22:20 2018
    State : clean
    Active Devices : 3   Working Devices : 3   Failed Devices : 0   Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 64K
    Name : 33ea2557:data-2 (local to host 33ea2557)
    UUID : 1530c06e:fec73963:800bf076:2cf6cf0d
    Events : 5499

    Number Major Minor RaidDevice State
    0      8     5     0          active sync   /dev/sda5
    1      8     37    1          active sync   /dev/sdc5
    2      8     21    2          active sync   /dev/sdb5
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Hi
So it hasn't worked - the resync has finished, but the volume size hasn't grown, and it still shows the RAID groups with XRAID greyed out. What can I do from here?
Tx
Andy
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid5 sde4[0] sdd4[7] sdc4[6] sdb4[9] sda4[8] sdf4[1]
      9766874560 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md126 : active raid5 sdd3[9] sde3[4] sdf3[5] sda3[6] sdb3[7] sdc3[8]
      19510833920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md127 : active raid5 sda5[0] sdb5[2] sdc5[1]
      7811566336 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid10 sda2[0] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
      1569792 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
md0 : active raid1 sdd1[9] sde1[4](W) sdf1[5](W) sda1[6] sdb1[7] sdc1[8]
      4190208 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>

/dev/md/0:
    Version : 1.2
    Creation Time : Thu Feb 13 19:50:28 2014
    Raid Level : raid1
    Array Size : 4190208 (4.00 GiB 4.29 GB)
    Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
    Raid Devices : 6   Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Fri Jul 27 07:39:54 2018
    State : clean
    Active Devices : 6   Working Devices : 6   Failed Devices : 0   Spare Devices : 0
    Name : 33ea2557:0 (local to host 33ea2557)
    UUID : a425c11d:4b9480fd:8fa2e4ff:906c719b
    Events : 1964650

    Number Major Minor RaidDevice State
    9      8     49    0          active sync   /dev/sdd1
    8      8     33    1          active sync   /dev/sdc1
    7      8     17    2          active sync   /dev/sdb1
    6      8     1     3          active sync   /dev/sda1
    5      8     81    4          active sync writemostly   /dev/sdf1
    4      8     65    5          active sync writemostly   /dev/sde1

/dev/md/1:
    Version : 1.2
    Creation Time : Thu Jul 26 16:22:47 2018
    Raid Level : raid10
    Array Size : 1569792 (1533.00 MiB 1607.47 MB)
    Used Dev Size : 523264 (511.00 MiB 535.82 MB)
    Raid Devices : 6   Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Fri Jul 27 06:30:25 2018
    State : clean
    Active Devices : 6   Working Devices : 6   Failed Devices : 0   Spare Devices : 0
    Layout : near=2
    Chunk Size : 512K
    Name : 33ea2557:1 (local to host 33ea2557)
    UUID : 911b186b:6f606710:81ff18c2:d960e9df
    Events : 19

    Number Major Minor RaidDevice State
    0      8     2     0          active sync set-A   /dev/sda2
    1      8     18    1          active sync set-B   /dev/sdb2
    2      8     34    2          active sync set-A   /dev/sdc2
    3      8     50    3          active sync set-B   /dev/sdd2
    4      8     66    4          active sync set-A   /dev/sde2
    5      8     82    5          active sync set-B   /dev/sdf2

/dev/md/data-0:
    Version : 1.2
    Creation Time : Thu Feb 13 19:50:29 2014
    Raid Level : raid5
    Array Size : 19510833920 (18606.98 GiB 19979.09 GB)
    Used Dev Size : 3902166784 (3721.40 GiB 3995.82 GB)
    Raid Devices : 6   Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Fri Jul 27 07:39:17 2018
    State : clean
    Active Devices : 6   Working Devices : 6   Failed Devices : 0   Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 64K
    Name : 33ea2557:data-0 (local to host 33ea2557)
    UUID : 92b2bc5f:cf375e4b:5dcb468f:89d2bd81
    Events : 13331486

    Number Major Minor RaidDevice State
    9      8     51    0          active sync   /dev/sdd3
    8      8     35    1          active sync   /dev/sdc3
    7      8     19    2          active sync   /dev/sdb3
    6      8     3     3          active sync   /dev/sda3
    5      8     83    4          active sync   /dev/sdf3
    4      8     67    5          active sync   /dev/sde3

/dev/md/data-1:
    Version : 1.2
    Creation Time : Tue Nov 24 12:23:49 2015
    Raid Level : raid5
    Array Size : 9766874560 (9314.42 GiB 10001.28 GB)
    Used Dev Size : 1953374912 (1862.88 GiB 2000.26 GB)
    Raid Devices : 6   Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Fri Jul 27 07:39:17 2018
    State : clean
    Active Devices : 6   Working Devices : 6   Failed Devices : 0   Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 64K
    Name : 33ea2557:data-1 (local to host 33ea2557)
    UUID : 4010d079:27f07b46:33b86d75:a3989442
    Events : 15523

    Number Major Minor RaidDevice State
    0      8     68    0          active sync   /dev/sde4
    1      8     84    1          active sync   /dev/sdf4
    8      8     4     2          active sync   /dev/sda4
    9      8     20    3          active sync   /dev/sdb4
    6      8     36    4          active sync   /dev/sdc4
    7      8     52    5          active sync   /dev/sdd4

/dev/md/data-2:
    Version : 1.2
    Creation Time : Sun Jul 22 21:41:37 2018
    Raid Level : raid5
    Array Size : 7811566336 (7449.69 GiB 7999.04 GB)
    Used Dev Size : 3905783168 (3724.85 GiB 3999.52 GB)
    Raid Devices : 3   Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Fri Jul 27 07:39:17 2018
    State : clean
    Active Devices : 3   Working Devices : 3   Failed Devices : 0   Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 64K
    Name : 33ea2557:data-2 (local to host 33ea2557)
    UUID : 1530c06e:fec73963:800bf076:2cf6cf0d
    Events : 5499

    Number Major Minor RaidDevice State
    0      8     5     0          active sync   /dev/sda5
    1      8     37    1          active sync   /dev/sdc5
    2      8     21    2          active sync   /dev/sdb5
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
You have a triple layer data volume with md125, md126 and md127 as the three layers.
That means you've vertically expanded your data volume a couple of times.
From the looks of things you may have started out with 4TB disks, then moved to 6TB disks, and now on to 10TB disks?
There are only 3 disks showing in md127, sda, sdb and sdc.
If you look at disk_info.log, which disk is the 10TB disk you just added? Is it one of those three disks, or a different one? If it's a different one, which disk?
What does your partitions.log look like? Does the newly added 10TB disk show the same partitions as the 3 disks mentioned above?
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Hi
Thanks. sdd is the one that was just added (disk_info.log below).
Forgive me - I don't know how to interpret partitions.log:
major minor  #blocks     name
8     0      9766436864  sda
8     1      4194304     sda1
8     2      524288      sda2
8     3      3902297912  sda3
8     4      1953506020  sda4
8     5      3905914280  sda5
8     16     9766436864  sdb
8     17     4194304     sdb1
8     18     524288      sdb2
8     19     3902297912  sdb3
8     20     1953506020  sdb4
8     21     3905914280  sdb5
8     32     9766436864  sdc
8     33     4194304     sdc1
8     34     524288      sdc2
8     35     3902297912  sdc3
8     36     1953506020  sdc4
8     37     3905914280  sdc5
8     48     9766436864  sdd
8     49     4194304     sdd1
8     50     524288      sdd2
8     51     3902297912  sdd3
8     52     1953506020  sdd4
8     64     5860522584  sde
8     65     4194304     sde1
8     66     524288      sde2
8     67     3902297912  sde3
8     68     1953506020  sde4
8     80     5860522584  sdf
8     81     4194304     sdf1
8     82     524288      sdf2
8     83     3902297912  sdf3
8     84     1953506020  sdf4
9     0      4190208     md0
9     1      1569792     md1
9     127    7811566336  md127
9     126    19510833920 md126
9     125    9766874560  md125
Device: sda  Controller: 0  Channel: 0
Model: ST10000VN0004-1ZD101  Serial: ZA27GA65  Firmware: SC60
Class: SATA  RPM: 7200  Sectors: 19532873728
Pool: data  PoolType: RAID 5  PoolState: 1  PoolHostId: 33ea2557
Health data: ATA Error Count: 0  Reallocated Sectors: 0  Reallocation Events: 0  Spin Retry Count: 0  End-to-End Errors: 0  Command Timeouts: 0  Current Pending Sector Count: 0  Uncorrectable Sector Count: 0
Temperature: 36  Start/Stop Count: 1  Power-On Hours: 179  Power Cycle Count: 1  Load Cycle Count: 165

Device: sdb  Controller: 0  Channel: 1
Model: ST10000VN0004-1ZD101  Serial: ZA27GSKS  Firmware: SC60
Class: SATA  RPM: 7200  Sectors: 19532873728
Pool: data  PoolType: RAID 5  PoolState: 1  PoolHostId: 33ea2557
Health data: ATA Error Count: 0  Reallocated Sectors: 0  Reallocation Events: 0  Spin Retry Count: 0  End-to-End Errors: 0  Command Timeouts: 0  Current Pending Sector Count: 0  Uncorrectable Sector Count: 0
Temperature: 37  Start/Stop Count: 1  Power-On Hours: 82  Power Cycle Count: 1  Load Cycle Count: 134

Device: sdc  Controller: 0  Channel: 2
Model: ST10000VN0004-1ZD101  Serial: ZA27FV7C  Firmware: SC60
Class: SATA  RPM: 7200  Sectors: 19532873728
Pool: data  PoolType: RAID 5  PoolState: 1  PoolHostId: 33ea2557
Health data: ATA Error Count: 0  Reallocated Sectors: 0  Reallocation Events: 0  Spin Retry Count: 0  End-to-End Errors: 0  Command Timeouts: 0  Current Pending Sector Count: 0  Uncorrectable Sector Count: 0
Temperature: 36  Start/Stop Count: 1  Power-On Hours: 126  Power Cycle Count: 1  Load Cycle Count: 187

Device: sdd  Controller: 0  Channel: 3
Model: ST10000VN0004-2GS11L  Serial: ZJV03DE8  Firmware: SC60
Class: SATA  RPM: 7200  Sectors: 19532873728
Pool: data  PoolType: RAID 5  PoolState: 1  PoolHostId: 33ea2557
Health data: ATA Error Count: 0  Reallocated Sectors: 0  Reallocation Events: 0  Spin Retry Count: 0  Command Timeouts: 0  Current Pending Sector Count: 0  Uncorrectable Sector Count: 0
Temperature: 38  Start/Stop Count: 9  Power-On Hours: 186  Power Cycle Count: 9  Load Cycle Count: 15

Device: sde  Controller: 0  Channel: 4
Model: HGST HDN726060ALE610  Serial: NCG9UA9S  Firmware: APGNT517
Class: SATA  RPM: 7200  Sectors: 11721045168
Pool: data  PoolType: RAID 5  PoolState: 1  PoolHostId: 33ea2557
Health data: ATA Error Count: 0  Reallocated Sectors: 0  Reallocation Events: 0  Spin Retry Count: 0  Current Pending Sector Count: 0  Uncorrectable Sector Count: 0
Temperature: 45  Start/Stop Count: 1659  Power-On Hours: 19379  Power Cycle Count: 852  Load Cycle Count: 2320

Device: sdf  Controller: 0  Channel: 5
Model: HGST HDN726060ALE610  Serial: NCG9TPDS  Firmware: APGNT517
Class: SATA  RPM: 7200  Sectors: 11721045168
Pool: data  PoolType: RAID 5  PoolState: 1  PoolHostId: 33ea2557
Health data: ATA Error Count: 0  Reallocated Sectors: 0  Reallocation Events: 0  Spin Retry Count: 0  Current Pending Sector Count: 0  Uncorrectable Sector Count: 0
Temperature: 43  Start/Stop Count: 1607  Power-On Hours: 19378  Power Cycle Count: 852  Load Cycle Count: 2268
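[Editor's note: the key fact buried in partitions.log is that sdd has partitions 1-4 but no partition 5, unlike the other 10TB disks. A minimal sketch (not a ReadyNAS tool; sample rows copied from the log above) of how to spot that programmatically:]

```python
import re
from collections import defaultdict

# Rows excerpted from the partitions.log above: sdd lacks a 5th partition.
sample = """
8 0 9766436864 sda
8 1 4194304 sda1
8 2 524288 sda2
8 3 3902297912 sda3
8 4 1953506020 sda4
8 5 3905914280 sda5
8 48 9766436864 sdd
8 49 4194304 sdd1
8 50 524288 sdd2
8 51 3902297912 sdd3
8 52 1953506020 sdd4
"""

def partition_numbers(log_text):
    """Map each whole disk (e.g. 'sda') to the set of its partition numbers."""
    parts = defaultdict(set)
    for line in log_text.strip().splitlines():
        name = line.split()[-1]
        m = re.fullmatch(r"(sd[a-z]+)(\d+)", name)
        if m:
            parts[m.group(1)].add(int(m.group(2)))
        elif re.fullmatch(r"sd[a-z]+", name):
            parts.setdefault(name, set())
    return dict(parts)

parts = partition_numbers(sample)
all_nums = set().union(*parts.values())
for disk, nums in sorted(parts.items()):
    missing = all_nums - nums
    if missing:
        print(disk, "is missing partition(s):", sorted(missing))
# → sdd is missing partition(s): [5]
```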
I've responded to your PM.
Thanks
Andy
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
You can see that sdd was missing partition 5. I've created that partition and added it to md127. It's currently doing a resync; once that's complete, the data volume should expand.
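[Editor's note: the exact commands weren't posted in the thread. A hedged sketch of the generic mdadm/sgdisk sequence that performs an equivalent repair is below; the commands are echoed rather than executed, since running them against the wrong device would destroy data. On a ReadyNAS the OS then grows the filesystem on top of the expanded md layer.]

```shell
# Sketch only: the device/array names are taken from this thread.
DISK=/dev/sdd
MD=/dev/md127

# 1. Create partition 5 on sdd, matching the peers' layout
#    (sgdisk's 0:0 defaults use the largest free block):
CMD1="sgdisk --new=5:0:0 $DISK"
# 2. Add the new partition to the 3-disk RAID 5 layer:
CMD2="mdadm $MD --add ${DISK}5"
# 3. Grow the layer from 3 to 4 devices, triggering a reshape:
CMD3="mdadm --grow $MD --raid-devices=4"

printf '%s\n' "$CMD1" "$CMD2" "$CMD3"
```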
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Hi
Thanks for that.
Will that 'restore' XRAID (as opposed to what currently appears to be Flex-RAID) as well?
Cheers
Andy
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
This doesn't do anything to change X-RAID versus Flex-RAID. If you're using X-RAID that will be indicated on the Volumes tab.
The reason you have multiple RAID layers is because of vertical expansion. It's normal to have this in X-RAID when replacing disks with higher capacity ones.
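[Editor's note: the layer sizes in the mdstat output above are consistent with this. Each RAID 5 layer contributes (devices − 1) × per-device slice, and the slices are roughly 4TB, 2TB (the 6TB − 4TB step), and 4TB (the 10TB − 6TB step). A quick arithmetic sketch, using the block counts from the logs:]

```python
# Each RAID 5 layer's usable size is (raid devices - 1) * used-dev-size,
# since one disk's worth of space holds parity. Sizes are 1 KiB blocks
# copied from the mdadm --detail output above.
layers = {
    # name: (raid devices, used dev size, array size reported by mdadm)
    "data-0": (6, 3902166784, 19510833920),
    "data-1": (6, 1953374912, 9766874560),
    "data-2": (3, 3905783168, 7811566336),
}

total = 0
for name, (n, slice_kib, expected) in layers.items():
    usable = (n - 1) * slice_kib
    assert usable == expected, name   # matches mdadm's reported array size
    total += usable

print(f"total data volume: {total * 1024 / 1e12:.1f} TB")
# → total data volume: 38.0 TB
```

(Once data-2 is grown to 4 devices, its usable size rises from 2 to 3 slices, which is the expansion the resync delivers.)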
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Thanks.
That was how I understood it - the fact that I had repeatedly expanded vertically could only mean XRAID. But my volume screen (see screenshot attached to the first post) showed XRAID as grey, not green, and listed the 3 RAID groups. Was that just because it didn't know what to make of the missing partition, since presumably for it to 'be' XRAID all the space would have been expanded into?
Cheers
Andy
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
I suggest that you wait for the volume to be correctly mounted, and see what mode shows up then.
If it does show up as flexraid, you should be able to switch it back to xraid (with no data loss).
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Hi
Resync'd, and the sizes are correct. It still shows as Flex-RAID, though, and says I cannot switch back because the 'volume [was] expanded' - see screenshot.
Logs emailed.
Thanks
Andy
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Flex-RAID and X-RAID expansion work differently, so once you've done an expansion using Flex-RAID you can't switch back.
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
How does this limit me for future expansion? (i.e. swapping disks 5 and 6 for 10TB drives in the future, and maybe vertically expanding them all again when capacities are larger)
Thanks
Andy
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
@powellandy1 wrote:
How does this limit me for future expansion? (i.e. swapping disks 5 and 6 for 10TB drives in the future, and maybe vertically expanding them all again when capacities are larger)
Future expansion (including vertical expansion) is still possible, but likely will require manual steps.
You could of course do a factory reset, rebuild the NAS as XRAID, and then restore your files from backup.
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Thanks @StephenB - I could do that. It's all backed up on the Ultra6 (although I'm always slightly cautious, as that's RAID0 and hence fragile). I've put half on 4x4TB in the 104, and could put the spare 4x6TB in there and do the other half - but the 104 is so slow!
@mdgm-ntgr - could you tell me the commands you used to add the partition and then grow into it? I assume I would need to do something similar for drives 5 and 6. I presume adding a 4th stripe would be much more complicated.
Is there any troubleshooting I can do to help? Presumably it's not by design that this happened. As stated in the first post, I simply hot-swapped a drive that happened to have an old volume on it, destroyed that volume, then deliberately rebooted the NAS rather than picking an option to grow/spare etc., hoping it would then recognise the empty drive and just vertically expand as XRAID automatically.
Tx
Andy
Re: Adding a 4th 10TB drive to a XRAID array on Pro6 6.9.3 (please reassure me)
Something really odd.
I put the disks in a new 516 and did a factory reset. It says the new volume is 27TB, which seems to be missing the extra capacity from the 10TB drives.
New thread posted.