Stanman130
Apr 03, 2014
OS6 Data Recovery - How to Mount BTRFS Volumes
I recently purchased a ReadyNAS 314 and 5 Seagate ST2000VN000 hard disks (one as a cold spare). I work as a system administrator, so I've been reading up on OS6 before I entrust this new system with m...
Stanman130
Aug 03, 2014
Per mdgm above, the key appears to be getting an updated kernel and set of BTRFS tools. I switched to Fedora Linux v20 x86_64 with kernel 3.15 and that works without modification.
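For anyone checking their own recovery machine, these are the standard commands to confirm you have a new enough kernel and the BTRFS tools (the rpm query is Fedora-specific):
uname -r (shows the running kernel version)
btrfs --version (shows the btrfs-progs version)
rpm -q btrfs-progs mdadm (confirms both packages are installed on Fedora)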
Second OS6 Data Recovery Test
ReadyNAS 314 configuration
OS 6.1.6
Default configuration - one user created
Drives - 4 x Seagate ST2000VN000 disks - 2 TB each (on the approved hardware list)
Volume configuration - Flex-RAID in a RAID 6 array for the data partition (appears as md127)
Approximately 17 GB of data copied over - Debian Linux 7.4 AMD64 DVD ISO files (five files) verified with SHA1
One user created - SMB used as the file share protocol
Recovery PC configuration
OS - Fedora Linux 20 x86_64 - default install with customized partitioning - BTRFS selected as the format, default partitions (/home on the same partition as root)
Ran "yum update" from the command line after install - many packages (and the kernel) were updated as of 2 Aug 2014.
OS name - Fedora 20
Kernel - 3.15.7-200.fc20.x86_64 SMP
BTRFS - v3.14.2
Motherboard - Gigabyte GA-Z77X-UP5TH
CPU - Intel Core i5-3570
RAM - 8 GB
SATA ports - 6
Port 0 - Plextor DVD drive
Port 1 - Seagate ST3808110AS - 80 GB SATA drive for OS install
Port 2 - Drive 1 from ReadyNAS
Port 3 - Drive 2 from ReadyNAS
Port 4 - Drive 3 from ReadyNAS
Port 5 - Drive 4 from ReadyNAS
BIOS set to AHCI and "Legacy" mode (meaning non-UEFI)
Recovering the data at the command line
Followed the recommended commands from the earlier thread entry (all commands executed as root at the command line):
yum update (updates packages and kernel to latest online version)
*** NOTE: No need to install mdadm or btrfs-progs - since BTRFS was chosen as the install volume format, these were already in place. Running the mdadm --assemble --scan command was also not needed - the system found the array at boot up automatically.
cat /proc/mdstat (shows the connected arrays and their properties)
mount -t btrfs -o ro /dev/md127 /mnt (mounts the data volume at /mnt and allows access)
NOTE: On Fedora, the tools are actually named "btrfs-progs" and the version was "btrfs-progs 3.14.2-3.fc20.x86_64"
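For convenience, here is my whole sequence in one place, including the assemble step for anyone whose system does not pick up the arrays at boot (substitute whatever md device number your mdstat output shows for the large data partition):
cat /proc/mdstat (check which md device holds the large data partition)
mdadm --assemble --scan (only needed if the arrays were not assembled automatically at boot)
mount -t btrfs -o ro /dev/md127 /mnt (mounts the data volume read-only at /mnt)
ls /mnt (the ReadyNAS shares should show up as folders here)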
Data recovery
Once the volume was mounted, I was able to move to the /mnt/Documents folder where the test ISO files were stored. The files were moved to the folder /home/<username>/recov.
The files were checked again by generating an SHA1 sum for each one and comparing it against the official SHA1 sum published for the distro. The sums matched, showing that the data was not corrupted. This was only 17 GB of test data, but it was just a proof of concept.
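For anyone repeating this, the check itself was nothing fancy - something along these lines, with the exact ISO file names depending on what you copied:
cd /home/<username>/recov
sha1sum *.iso (prints an SHA1 sum for each recovered ISO)
The sums were then compared against the official SHA1SUMS list that Debian publishes for the 7.4 DVD images (sha1sum -c SHA1SUMS also works if you download that file into the same folder).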
Extra Test
There was a question up-thread about whether the mounting order of the hard disks makes a difference, so I decided to test that (since everything was ready anyway).
Here's what I found:
Out of Order Test 1
Motherboard SATA 2 - ReadyNAS Disk 3
Motherboard SATA 3 - ReadyNAS Disk 1
Motherboard SATA 4 - ReadyNAS Disk 4
Motherboard SATA 5 - ReadyNAS Disk 2
Data volume appears as "md126" this time when cat /proc/mdstat is run. It is mounted the same way:
mount -t btrfs -o ro /dev/md126 /mnt
Some sample SHA1 sums verified correctly.
Out of Order Test 2
Motherboard SATA 2 - ReadyNAS Disk 4
Motherboard SATA 3 - ReadyNAS Disk 1
Motherboard SATA 4 - ReadyNAS Disk 3
Motherboard SATA 5 - ReadyNAS Disk 2
Data volume appears as "md127" this time when cat /proc/mdstat is run. Mounted the same way as above and again the SHA1 sums verified correctly.
Can someone with more knowledge please explain the components of the cat /proc/mdstat output? It would be nice to know what I'm looking at.
From the "Out of Order Test 2", I see the following:
Personalities : [raid6] [raid5] [raid4] [raid1]
md125 : active (auto-read-only) raid1 sdd1[2] sdb1[3] sde1[1] sdc1[0]
4192192 blocks super 1.2 [4/4] [UUUU]
md126 : active (auto-read-only) raid6 sdd2[2] sde2[1] sdc2[0] sdb2[3]
1047936 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
md127 : active (auto-read-only) raid6 sde3[1] sdd3[2] sdc3[0] sdb3[3]
3897329408 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
I mounted md125, and that appears to be the OS6 partition - no user data, just the OS files. That might still be crucial for troubleshooting, since the log files would be available for analysis. The md126 partition would not mount - not sure what that is, maybe the swap space?
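If anyone wants to confirm what is actually on those smaller md devices, I believe these standard commands would show it (I have not dug into the output in detail yet):
mdadm --detail /dev/md125 (shows the RAID level, member partitions and state)
blkid /dev/md125 (reports the filesystem type on the device)
blkid /dev/md126 (should report a swap signature if my guess is right)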
Why do the md names change? Sometimes the data volume is "md127" and sometimes it's "md126".
Just out of curiosity, I thought I'd do some "less than ideal" data recovery tests.
Single Hard Disk Missing from Array Test 1
For this test, I put the drive cables back in the first test configuration:
Mb SATA 2 - RN Disk 1
Mb SATA 3 - RN Disk 2
Mb SATA 4 - RN Disk 3
Mb SATA 5 - RN Disk 4
I tested to make sure the data was available (strangely, this time the data was on "md126"). It was accessible.
Then I shut down the machine and disconnected the Disk 1 data cable (simulated failed drive) and started up again.
This time the /proc/mdstat command gives this result:
Personalities :
md125 : inactive sdc1[2] (S) sdd1[3] (S) sdb1[1] (S)
12576768 blocks super 1.2
md126 : inactive sdd3[3] (S) sdc3[2] (S) sdb3[1] (S)
5845994613 blocks super 1.2
md127 : inactive sdb2[1] (S) sdd2[3] (S) sdc2[2] (S)
1572096 blocks super 1.2
unused devices: <none>
I poked around with mdadm, but other than getting a little information, I'm not sure how to rebuild this array or force it to mount.
I'm open to ideas on how to get this array to be visible and accessible with Disk 1 missing. What's the best way to force the array to be active and mount up with a disk missing? I'd like to test the scenario where a spare disk is not available, so the array needs to mount up using 3 disks.
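My best guess from the mdadm man page is something like the following, but I have not tried it yet, so corrections from anyone who has done this for real are welcome:
mdadm --stop /dev/md126 (stop the inactive array first)
mdadm --assemble --run --force /dev/md126 /dev/sdb3 /dev/sdc3 /dev/sdd3 (assemble the data array degraded from the three remaining data partitions)
cat /proc/mdstat (the array should now show as active but degraded)
mount -t btrfs -o ro /dev/md126 /mnt (mount read-only as before)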
Thanks for all the help, it's going great so far!
Stan