NETGEAR is aware of a growing number of phone and online scams. To learn how to stay safe click here.
Forum Discussion
jlficken
Oct 16, 2019Aspirant
4 disk RAID 10 array....how do I tell which disks are striped?
I'm trying to figure out which disks are paired together as I'm moving data to another system and am going to canabalize the array for now. Everything is backed up so if something happens I'll be...
- Oct 17, 2019
jlficken wrote:Number Major Minor RaidDevice State 0 8 3 0 active sync set-A /dev/sda3 - 0 0 1 removed 2 8 35 2 active sync set-A /dev/sdc3 3 8 51 3 active sync set-B /dev/sdd3Would it seem logical that I can remove either disk 0 or disk 2 safely?
Yes, that is the case. It looks like you already removed disk sdb (bay 2) - which was mirrored by sdd (bay 4).
StephenB
Oct 17, 2019Guru - Experienced User
I've never used RAID 10, so I'm not 100% sure. But perhaps download the log zip file, and look at mdstat.log.
jlficken
Oct 17, 2019Aspirant
Thanks!!!
I see this in that file:
Consistency Policy : unknown
Name : 7c6e3906:MAIN-0 (local to host 7c6e3906)
UUID : fd0716df:099c5648:d351a440:b82e32f5
Events : 13588
Number Major Minor RaidDevice State
0 8 3 0 active sync set-A /dev/sda3
- 0 0 1 removed
2 8 35 2 active sync set-A /dev/sdc3
3 8 51 3 active sync set-B /dev/sdd3Would it seem logical that I can remove either disk 0 or disk 2 safely?
- jlfickenOct 17, 2019Aspirant
Here's the full contents for that volume:
/dev/md/MAIN-0: Version : 1.2 Creation Time : Sat Mar 18 15:51:34 2017 Raid Level : raid10 Array Size : 15618353664 (14894.82 GiB 15993.19 GB) Used Dev Size : 7809176832 (7447.41 GiB 7996.60 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Thu Oct 17 12:35:48 2019 State : clean, degraded Active Devices : 3 Working Devices : 3 Failed Devices : 1 Spare Devices : 0 Layout : near=2 Chunk Size : 64K Consistency Policy : unknown Name : 7c6e3906:MAIN-0 (local to host 7c6e3906) UUID : fd0716df:099c5648:d351a440:b82e32f5 Events : 13588 Number Major Minor RaidDevice State 0 8 3 0 active sync set-A /dev/sda3 - 0 0 1 removed 2 8 35 2 active sync set-A /dev/sdc3 3 8 51 3 active sync set-B /dev/sdd3 1 8 19 - faulty - StephenBOct 17, 2019Guru - Experienced User
jlficken wrote:Number Major Minor RaidDevice State 0 8 3 0 active sync set-A /dev/sda3 - 0 0 1 removed 2 8 35 2 active sync set-A /dev/sdc3 3 8 51 3 active sync set-B /dev/sdd3Would it seem logical that I can remove either disk 0 or disk 2 safely?
Yes, that is the case. It looks like you already removed disk sdb (bay 2) - which was mirrored by sdd (bay 4).
- jlfickenOct 18, 2019Aspirant
That is correct.
I knew I could remove 1 of the 4 drives without any issues (besides the array being "Degraded"), however, I wasn't sure how to know which drive to remove next so I came here.
They are all HGST HE8 UltraStar drives and I have a complete backup on another ReadyNAS so I'm not too concerned about running it degraded for a little bit while I am finishing the new NAS. The data from this volume will be moved over yet this morning via rysnc and then I can move on to the next volume.
Thanks again!
- StephenBOct 18, 2019Guru - Experienced User
jlficken wrote:
I knew I could remove 1 of the 4 drives without any issues (besides the array being "Degraded"), however, I wasn't sure how to know which drive to remove next so I came here.
I'm glad I could help.
JohnCM_S: There is nothing about this for any of the more advanced RAID modes in the manuals. It'd be good if that were added (and a kb article on how to determine the RAID structure for RAID-10, 50, and 60 from mdstat.log would be really helpful). Is that something you can request?
Related Content
- Dec 25, 2018Retired_Member
NETGEAR Academy
Boost your skills with the Netgear Academy - Get trained, certified and stay ahead with the latest Netgear technology!
Join Us!