Forum Discussion
varunmsh
Dec 24, 2019 · Aspirant
RN104 raid volume recovery
I understand this could be a duplicate post, but I looked through several other posts on the Netgear community and could not find a solution, so I am posting my problem here. I had a NAS (RN104) with 4T...
varunmsh
Dec 25, 2019 · Aspirant
Hi Stephen,
Thanks for the suggestion. I think I get your point, and I should be able to mount the RAID volume with the command you gave. The only thing I am stuck on is that the required RAID array is currently inactive. I am sure I will be able to mount it once it is active. Please see the output below for reference.
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : inactive sda3[2](S)
3902168864 blocks super 1.2
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]

[root@localhost-live ~]# sfdisk -l /dev/sda
Disk /dev/sda: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM000-1F21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9BD3A16D-D45B-4B7D-A278-43F09DFDD619

Device       Start         End     Sectors  Size Type
/dev/sda1       64     8388671     8388608    4G Linux RAID
/dev/sda2  8388672     9437247     1048576  512M Linux RAID
/dev/sda3  9437248  7814037119  7804599872  3.6T Linux RAID
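For reference, given this layout the 3.6T /dev/sda3 member (md125) is the data volume. Once md125 can be started, mounting it would typically look something like the following sketch, assuming the standard ReadyNAS OS 6 BTRFS data volume and an arbitrary /mnt/recovery mount point:

# Sketch only: once md125 is active, mount the data volume read-only
# (ReadyNAS OS 6 data volumes are BTRFS; /mnt/recovery is an arbitrary mount point)
mkdir -p /mnt/recovery
mount -t btrfs -o ro /dev/md125 /mnt/recovery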
I tried recreating the array by stopping and reassembling it, but I get the error below. I also tried the force option and get the same error. Please refer to the output below.
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : inactive sda3[2](S)
3902168864 blocks super 1.2
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]
unused devices: <none>
[root@localhost-live ~]# mdadm --stop /dev/md125
mdadm: stopped /dev/md125
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]
unused devices: <none>
[root@localhost-live ~]# mdadm --assemble /dev/md125 /dev/sda3 -v
mdadm: looking for devices for /dev/md125
mdadm: /dev/sda3 is identified as a member of /dev/md125, slot 2.
mdadm: no uptodate device for slot 0 of /dev/md125
mdadm: no uptodate device for slot 1 of /dev/md125
mdadm: added /dev/sda3 to /dev/md125 as 2
mdadm: /dev/md125 assembled from 1 drive - not enough to start the array.
[root@localhost-live ~]# mdadm --assemble /dev/md125 /dev/sda3 -v --force
mdadm: looking for devices for /dev/md125
mdadm: /dev/sda3 is identified as a member of /dev/md125, slot 2.
mdadm: no uptodate device for slot 0 of /dev/md125
mdadm: no uptodate device for slot 1 of /dev/md125
mdadm: added /dev/sda3 to /dev/md125 as 2
mdadm: /dev/md125 assembled from 1 drive - not enough to start the array.

It appears that, based on the RAID attributes, the system is looking for 3 drives to start the array. Is it possible to tweak the properties and fool the system into starting the array with only one drive? Or is there some other way to activate it with the single drive available? Please suggest.
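For context, besides --force the other knob usually tried is starting the partially assembled array explicitly. A sketch is below, but with only one member of a 3-device RAID-5 present (see the --examine output that follows), the kernel will still refuse, since RAID-5 can survive only a single missing member:

# Sketch: try to start the partially assembled array anyway
mdadm --run /dev/md125
# This cannot succeed here: a 3-device RAID-5 needs at least 2 of its
# 3 members present, and only /dev/sda3 is available.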
[root@localhost-live ~]# mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x44
Array UUID : 6a290f1c:4c925fb6:272e64d6:881a66eb
Name : 2fe57776:data-0
Creation Time : Sun Dec 22 03:06:23 2019
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 7804337728 (3721.40 GiB 3995.82 GB)
Array Size : 7804337664 (7442.80 GiB 7991.64 GB)
Used Dev Size : 7804337664 (3721.40 GiB 3995.82 GB)
Data Offset : 262144 sectors
New Offset : 261888 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 86254d78:c5d7c994:11612b9e:c161f30b
Reshape pos'n : 2116736 (2.02 GiB 2.17 GB)
Delta Devices : 1 (2->3)
Update Time : Mon Dec 23 08:47:21 2019
Bad Block Log : 512 entries available at offset 264 sectors
Checksum : 332f4392 - correct
Events : 175
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
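The fields that matter most for recovery planning are Raid Level, Raid Devices, and the reshape state. A quick way to pull just those out of the superblock (a sketch, using the same device as above):

# Extract only the fields relevant to recovery planning
mdadm --examine /dev/sda3 | grep -E 'Raid Level|Raid Devices|Reshape|Delta Devices'
# Raid Devices : 3 together with Delta Devices : 1 (2->3) means the array
# was mid-reshape from a 2-disk to a 3-disk RAID-5, and Reshape pos'n
# shows how far that reshape had progressed.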
[root@localhost-live ~]# mdadm --assemble --scan -v
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/2fe57776:0
mdadm: no recogniseable superblock on /dev/dm-1
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: no recogniseable superblock on /dev/loop2
mdadm: no recogniseable superblock on /dev/loop1
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: Cannot assemble mbr metadata on /dev/sdb2
mdadm: Cannot assemble mbr metadata on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda3 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: No arrays found in config file or automatically
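The "is busy - skipping" lines above are expected: /dev/sda1-3 are already claimed by the auto-assembled md125/md126/md127, so a scan can only pick them up after those arrays are stopped. A sketch, assuming nothing on them is currently mounted:

# Release the partitions before re-scanning (assumes nothing on
# md125/md126/md127 is mounted)
mdadm --stop /dev/md125 /dev/md126 /dev/md127
mdadm --assemble --scan -v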
StephenB
Dec 26, 2019 · Guru - Experienced User
varunmsh wrote:
Hi Stephen,
Thanks for the suggestion. I think I get your point, and I should be able to mount the RAID volume with the command you gave. The only thing I am stuck on is that the required RAID array is currently inactive. I am sure I will be able to mount it once it is active. Please see the output below for reference.
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : inactive sda3[2](S)
3902168864 blocks super 1.2
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]

[root@localhost-live ~]# sfdisk -l /dev/sda
Disk /dev/sda: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM000-1F21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9BD3A16D-D45B-4B7D-A278-43F09DFDD619

Device       Start         End     Sectors  Size Type
/dev/sda1       64     8388671     8388608    4G Linux RAID
/dev/sda2  8388672     9437247     1048576  512M Linux RAID
/dev/sda3  9437248  7814037119  7804599872  3.6T Linux RAID
md125 is your data array - not the usual md127. md127 is the OS partition (which is 4 GB) - that is normally md0, and md126 is the swap partition - normally md1.
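One way to confirm that mapping directly from the superblocks (a sketch; on ReadyNAS members the Name field encodes the volume, and the data volume shows up above as 2fe57776:data-0):

# Show which named array each partition belongs to
mdadm --examine /dev/sda1 /dev/sda2 /dev/sda3 | grep -E '^/dev/sda|Name|Raid Level'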
varunmsh wrote:
[root@localhost-live ~]# mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x44
Array UUID : 6a290f1c:4c925fb6:272e64d6:881a66eb
Name : 2fe57776:data-0
Creation Time : Sun Dec 22 03:06:23 2019
Raid Level : raid5
Raid Devices : 3
Looks like the system did at least partially convert the array to RAID-5. So you'd need at least two drives to mount it.
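If one of the other original members is still readable, a hedged sketch of what bringing it back in might look like (/dev/sdX3 is a placeholder device name, and the interrupted reshape may complicate a forced assembly; working read-only, ideally on cloned disks, is safer):

# Hypothetical second member /dev/sdX3 (placeholder name)
mdadm --examine /dev/sdX3                                       # confirm it carries the same Array UUID
mdadm --assemble --force --run /dev/md125 /dev/sda3 /dev/sdX3   # force-assemble from two members
mount -t btrfs -o ro /dev/md125 /mnt/recovery                   # mount read-only once started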
Note you could also use paid support (via my.netgear.com), and see if they can mount the array for you remotely (with the disks in the NAS).
varunmsh · Dec 26, 2019 · Aspirant
Thanks, I figured it would be md125; I already tried to recreate it.
Please allow me some time to explore on my end. I will update you shortly or close the thread with my findings.