Forum Discussion
varunmsh
Dec 24, 2019 Aspirant
RN104 raid volume recovery
I understand this could be a duplicate post, but I tried looking at several other posts on the NETGEAR community and could not find a solution, so I am posting my problem here. I had a NAS (RN104) with 4T...
Marc_V
Dec 25, 2019 NETGEAR Employee Retired
So your previous RAID1 was expanded to RAID5 but the expansion did not complete, right? Since Flex-RAID was used, the disk was added manually. Then there was a reset, if I understand correctly.
If you have a Windows PC, trying out ReclaiMe to access the data would be suggested. If you are using Linux, then you should be able to mount the RAID using the CLI.
Here are some community posts that might help:
https://ubuntuforums.org/showthread.php?t=2265348
https://unix.stackexchange.com/questions/300122/mount-a-single-hard-disk-that-was-part-of-raid-1
ddrescue might also be of use. I haven't used Linux much, to be honest, but I'm sure other members will be able to help as well.
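If one of the disks looks unhealthy, cloning it first and working from the clone is safer. A rough ddrescue sketch (sdX is the failing source, sdY a spare of equal or larger size; these names are examples, so verify with lsblk first):
ddrescue -f -n /dev/sdX /dev/sdY /root/sdX.map
ddrescue -f -r3 /dev/sdX /dev/sdY /root/sdX.map
The first pass copies what reads cleanly and skips around errors; the second pass retries the bad sectors. The map file lets ddrescue resume if interrupted.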
Hope this helps!
Regards
StephenB
Dec 25, 2019 Guru - Experienced User
Marc_V wrote:
https://ubuntuforums.org/showthread.php?t=2265348
These won't work with an OS-6 NAS.
Something like this should work:
# apt-get update
# apt-get install mdadm btrfs-tools
# mdadm --assemble --scan
# cat /proc/mdstat
# mount -t btrfs -o ro /dev/md127 /mnt
Though if the system was shut down in the middle of converting RAID-1 to RAID-5, you will likely need data recovery (such as ReclaiMe).
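A quick way to tell whether the conversion was interrupted is to inspect the RAID superblock on the data partition (assuming it shows up as /dev/sdb3 on your recovery system; adjust to match):
# mdadm --examine /dev/sdb3 | grep -E 'Raid Level|Raid Devices|Reshape|Delta Devices'
If the output includes "Reshape pos'n" and "Delta Devices" lines, the RAID-1 to RAID-5 reshape was still in progress when the unit went down.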
- varunmsh Dec 25, 2019 Aspirant
Hi Stephen,
Thanks for the suggestion. I think I got your point. The only problem I am stuck on now is that the md volumes are in an "inactive" state. I am sure that I will be able to mount them with your command once they are active. See the output below for reference.
----------------------------------------
[root@localhost-live ~]# cat /proc/mdstat
Personalities :
md125 : inactive sdb1[0](S)
4190208 blocks super 1.2
md126 : inactive sdb3[0](S)
3902166840 blocks super 1.2
md127 : inactive sdb2[2](S)
523264 blocks super 1.2
----------------------------------------
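As a side note, I understand an inactive array can sometimes be started in place rather than stopped and reassembled, though I assume it would also refuse here with only one member present (untested):
mdadm --run /dev/md126
mdadm --detail /dev/md126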
I tried stopping and then reassembling, but I get the error below.
----------------------------------------
[root@localhost-live ~]# mdadm --stop md126
mdadm: stopped md126
[root@localhost-live ~]# cat /proc/mdstat
Personalities :
md125 : inactive sdb1[0](S)
4190208 blocks super 1.2
md127 : inactive sdb2[2](S)
523264 blocks super 1.2
unused devices: <none>
[root@localhost-live ~]# mdadm --assemble /dev/md126 /dev/sdb3 -v
mdadm: looking for devices for /dev/md126
mdadm: /dev/sdb3 is identified as a member of /dev/md126, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md126
mdadm: no uptodate device for slot 2 of /dev/md126
mdadm: no uptodate device for slot 3 of /dev/md126
mdadm: added /dev/sdb3 to /dev/md126 as 0
mdadm: /dev/md126 assembled from 1 drive - not enough to start the array.
----------------------------------------
It looks from the output below like it requires 4 devices to start the array. I tried the force option too but still get the same error. Is it possible to tweak the md definition and fool it into considering one device only? Or maybe there is some other way to make it active? (See also the assembly sketch after the output below.)
----------------------------------------
[root@localhost-live ~]# mdadm --examine /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x44
Array UUID : 946f1daa:25266b75:83ccd4cc:c9f1d2b7
Name : 2fe57776:data-0
Creation Time : Sat Aug 20 15:07:12 2016
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 7804333680 (3721.40 GiB 3995.82 GB)
Array Size : 11706500352 (11164.19 GiB 11987.46 GB)
Used Dev Size : 7804333568 (3721.40 GiB 3995.82 GB)
Data Offset : 262144 sectors
New Offset : 261760 sectors
Super Offset : 8 sectors
State : clean
Device UUID : d6338322:5447e981:449fb043:53008fef
Reshape pos'n : 66118464 (63.06 GiB 67.71 GB)
Delta Devices : 2 (2->4)
Update Time : Sun Dec 22 01:36:17 2019
Checksum : 64f1ca9d - correct
Events : 373
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
----------------------------------------
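For reference, my understanding is that a RAID-5 array of 4 devices cannot start with fewer than 3 members, so forcing it from this one disk alone seems impossible. If I manage to attach the other member disks, I assume the forced assembly of the half-reshaped array would look something like this (untested; sdc3/sdd3/sde3 are placeholder names, and --invalid-backup is supposed to let mdadm continue an interrupted reshape when no backup file survives):
mdadm --stop /dev/md126
mdadm --assemble --force --run /dev/md126 --backup-file=/tmp/md126.backup --invalid-backup /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3
cat /proc/mdstat
I would only try this against clones of the disks.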
- varunmsh Dec 25, 2019 Aspirant
Hi Stephen,
Thanks for the suggestion. I think I get your point, and I should be able to mount the RAID volume with the command you gave. The only thing I am stuck on is that the required RAID is currently inactive. I am sure that I will be able to mount it once it is active. Please see the output below for reference.
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : inactive sda3[2](S)
3902168864 blocks super 1.2
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]
[root@localhost-live ~]# sfdisk -l /dev/sda
Disk /dev/sda: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM000-1F21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9BD3A16D-D45B-4B7D-A278-43F09DFDD619
Device Start End Sectors Size Type
/dev/sda1 64 8388671 8388608 4G Linux RAID
/dev/sda2 8388672 9437247 1048576 512M Linux RAID
/dev/sda3 9437248 7814037119 7804599872 3.6T Linux RAID
I tried recreating the array by stopping and reassembling, but I get the error below. I tried the force option also but get the same error. Please refer to the output below.
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : inactive sda3[2](S)
3902168864 blocks super 1.2
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]
unused devices: <none>
[root@localhost-live ~]# mdadm --stop /dev/md125
mdadm: stopped /dev/md125
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]
unused devices: <none>
[root@localhost-live ~]# mdadm --assemble /dev/md125 /dev/sda3 -v
mdadm: looking for devices for /dev/md125
mdadm: /dev/sda3 is identified as a member of /dev/md125, slot 2.
mdadm: no uptodate device for slot 0 of /dev/md125
mdadm: no uptodate device for slot 1 of /dev/md125
mdadm: added /dev/sda3 to /dev/md125 as 2
mdadm: /dev/md125 assembled from 1 drive - not enough to start the array.
[root@localhost-live ~]# mdadm --assemble /dev/md125 /dev/sda3 -v --force
mdadm: looking for devices for /dev/md125
mdadm: /dev/sda3 is identified as a member of /dev/md125, slot 2.
mdadm: no uptodate device for slot 0 of /dev/md125
mdadm: no uptodate device for slot 1 of /dev/md125
mdadm: added /dev/sda3 to /dev/md125 as 2
mdadm: /dev/md125 assembled from 1 drive - not enough to start the array.

It appears, based on the RAID attributes, that the system is looking for 3 drives to start the array. Is it possible to tweak the properties and fool the system into starting the array with one drive only? Or is there any other way to activate it with the one drive available? Please suggest.
[root@localhost-live ~]# mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x44
Array UUID : 6a290f1c:4c925fb6:272e64d6:881a66eb
Name : 2fe57776:data-0
Creation Time : Sun Dec 22 03:06:23 2019
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 7804337728 (3721.40 GiB 3995.82 GB)
Array Size : 7804337664 (7442.80 GiB 7991.64 GB)
Used Dev Size : 7804337664 (3721.40 GiB 3995.82 GB)
Data Offset : 262144 sectors
New Offset : 261888 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 86254d78:c5d7c994:11612b9e:c161f30b
Reshape pos'n : 2116736 (2.02 GiB 2.17 GB)
Delta Devices : 1 (2->3)
Update Time : Mon Dec 23 08:47:21 2019
Bad Block Log : 512 entries available at offset 264 sectors
Checksum : 332f4392 - correct
Events : 175
Layout : left-symmetric
Chunk Size : 64K
Device Role : Active device 2
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
[root@localhost-live ~]# mdadm --assemble --scan -v
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/md/2fe57776:0
mdadm: no recogniseable superblock on /dev/dm-1
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: no recogniseable superblock on /dev/loop2
mdadm: no recogniseable superblock on /dev/loop1
mdadm: no recogniseable superblock on /dev/loop0
mdadm: no recogniseable superblock on /dev/sdb3
mdadm: Cannot assemble mbr metadata on /dev/sdb2
mdadm: Cannot assemble mbr metadata on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: /dev/sda3 is busy - skipping
mdadm: /dev/sda2 is busy - skipping
mdadm: /dev/sda1 is busy - skipping
mdadm: Cannot assemble mbr metadata on /dev/sda
mdadm: No arrays found in config file or automatically
- StephenB Dec 26, 2019 Guru - Experienced User
varunmsh wrote:
Hi Stephen,
Thanks for the suggestion. I think I get your point, and I should be able to mount the RAID volume with the command you gave. The only thing I am stuck on is that the required RAID is currently inactive. I am sure that I will be able to mount it once it is active. Please see the output below for reference.
[root@localhost-live ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : inactive sda3[2](S)
3902168864 blocks super 1.2
md126 : inactive sda2[3](S)
522240 blocks super 1.2
md127 : active (auto-read-only) raid1 sda1[3]
4190208 blocks super 1.2 [3/1] [__U]
[root@localhost-live ~]# sfdisk -l /dev/sda
Disk /dev/sda: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM000-1F21
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9BD3A16D-D45B-4B7D-A278-43F09DFDD619
Device Start End Sectors Size Type
/dev/sda1 64 8388671 8388608 4G Linux RAID
/dev/sda2 8388672 9437247 1048576 512M Linux RAID
/dev/sda3 9437248 7814037119 7804599872 3.6T Linux RAID
md125 is your data array - not the usual md127. md127 is the OS partition (which is 4 GB) - that is normally md0, and md126 is the swap partition - normally md1.
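If in doubt about which array is which, the Name field in the RAID superblock gives it away; OS-6 names the data volume <hostid>:data-0. A quick check (using your partition names):
# mdadm --examine /dev/sda3 | grep -E 'Name|Raid Level|Raid Devices'
# mdadm --examine /dev/sda1 | grep -E 'Name|Raid Level'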
varunmsh wrote:
[root@localhost-live ~]# mdadm --examine /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x44
Array UUID : 6a290f1c:4c925fb6:272e64d6:881a66eb
Name : 2fe57776:data-0
Creation Time : Sun Dec 22 03:06:23 2019
Raid Level : raid5
Raid Devices : 3
Looks like the system did at least partially convert the array to RAID-5. So you'd need at least two drives to mount it.
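With two of the three members attached (device names assumed here; check yours with cat /proc/mdstat first), a degraded assemble plus read-only mount would look roughly like this:
# mdadm --stop /dev/md125
# mdadm --assemble --force --run /dev/md125 /dev/sda3 /dev/sdb3
# cat /proc/mdstat
# mount -t btrfs -o ro /dev/md125 /mnt
Though since the superblock also shows a reshape in flight (Delta Devices : 1 (2->3)), mdadm may refuse without a backup file; in that case recovery software is the safer route.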
Note you could also use paid support (via my.netgear.com), and see if they can mount the array for you remotely (with the disks in the NAS).
- varunmsh Dec 26, 2019 Aspirant
Thanks, I figured that it would be md125; I already tried to recreate it.
Please allow me some time to explore at my end. Will update you shortly or close the thread with my findings.