Forum Discussion
Stanman130
Apr 03, 2014 · Guide
OS6 Data Recovery - How to Mount BTRFS Volumes
I recently purchased a ReadyNAS 314 and 5 Seagate ST2000VN000 hard disks (one as a cold spare). I work as a system administrator, so I've been reading up on OS6 before I entrust this new system with m...
Kimera
Jul 30, 2014 · Guide
I tested the mounting procedure detailed in the previous post on the two disks taken from my ReadyNAS RN102 (populated with two Toshiba DT01ACA100 drives), which is running ReadyNAS OS 6.1.9 RC8.
I used a Fedora 20 x64 XFCE Live image without running a yum update (it's the 03.07.2014 respin, so already fairly current); my findings are below:
Software components:
[root@localhost ~]# rpm -q btrfs-progs
btrfs-progs-3.14.2-3.fc20.x86_64
[root@localhost ~]# btrfs version
Btrfs v3.14.2
[root@localhost ~]# uname -ar
Linux localhost 3.14.9-200.fc20.x86_64 #1 SMP Thu Jun 26 21:40:51 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
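For the record, both btrfs-progs and mdadm were already on this live image; if they weren't, I assume a plain yum install would bring them in (network access permitting, and package names as on Fedora 20):
[root@localhost ~]# yum install -y mdadm btrfs-progs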
Connected disks status:
[root@localhost ~]# fdisk -l
Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 328E1A16-CB49-47FA-9421-D9608DCA2752
Device Start End Size Type
/dev/sda1 64 8388671 4G Linux RAID
/dev/sda2 8388672 9437247 512M Linux RAID
/dev/sda3 9437248 1953521072 927G Linux RAID
Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B5EEA0F0-F7CD-4E9F-891F-2F1497D59651
Device Start End Size Type
/dev/sdb1 64 8388671 4G Linux RAID
/dev/sdb2 8388672 9437247 512M Linux RAID
/dev/sdb3 9437248 1953521072 927G Linux RAID
Disk /dev/md127: 926.9 GiB, 995236642816 bytes, 1943821568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/md126: 4 GiB, 4290772992 bytes, 8380416 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/md125: 511.4 MiB, 536281088 bytes, 1047424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
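Not strictly necessary, but lsblk gives a more compact view of the same layout if you prefer it over fdisk:
[root@localhost ~]# lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT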
mdadm status (my array wasn't degraded to start with; it was and still is in good shape):
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md125 : active (auto-read-only) raid1 sdb2[0] sda2[1]
523712 blocks super 1.2 [2/2] [UU]
md126 : active (auto-read-only) raid1 sdb1[0] sda1[1]
4190208 blocks super 1.2 [2/2] [UU]
md127 : active (auto-read-only) raid1 sdb3[0] sda3[1]
971910784 blocks super 1.2 [2/2] [UU]
bitmap: 0/8 pages [0KB], 65536KB chunk
unused devices: <none>
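Had /proc/mdstat come up empty, my first check would have been the RAID superblocks on the member partitions; only a sketch of what I would try, I didn't need it here:
[root@localhost ~]# mdadm --examine /dev/sda3 /dev/sdb3
[root@localhost ~]# mdadm --examine --scan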
BTRFS status:
[root@localhost ~]# btrfs fi label /dev/md127
0e35a911:data
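For a bit more than just the label (filesystem UUID, member devices, space used), you can also ask btrfs-progs directly:
[root@localhost ~]# btrfs filesystem show /dev/md127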
Running mdadm --assemble --scan only reported that there was nothing left to assemble, since the live image had already auto-assembled the arrays (see /proc/mdstat above):
[root@localhost ~]# mdadm --assemble --scan
mdadm: No arrays found in config file or automatically
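Had the live image not auto-assembled them, I suppose the data array could be re-assembled by hand from its member partitions (on OS6 the third partition of each disk holds the data volume, as the fdisk output above shows):
[root@localhost ~]# mdadm --assemble /dev/md127 /dev/sda3 /dev/sdb3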
Mounting /dev/md127 (data partition) looks good:
[root@localhost ~]# mount -t btrfs -o ro /dev/md127 /mnt
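(Had the mount not been this clean, my next attempt on this kernel would probably have been btrfs' read-only recovery mount option; option names change between kernel versions, so take this as a sketch only:)
[root@localhost ~]# mount -t btrfs -o ro,recovery /dev/md127 /mnt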
And here I am (as you can see, my array was and still is basically empty):
[root@localhost ~]# df -h|grep md127
/dev/md127 927G 3.1M 925G 1% /mnt
[root@localhost ~]# cd /mnt
[root@localhost mnt]# ls -lah
total 36K
drwxr-xr-x. 1 root root 124 Jul 28 11:36 .
drwxr-xr-x. 18 root root 4.0K Jul 30 03:54 ..
drwxrwxrwx. 1 root root 64 Jul 28 11:04 .apps
drwxrwxrwx+ 1 nobody nobody 0 Jul 16 11:43 Backup
drwxrwxrwx+ 1 nobody nobody 0 Jul 16 11:43 Documents
drwxr-xr-x. 1 98 98 10 Jul 16 11:04 home
drwxrwxrwx+ 1 nobody nobody 0 Jul 16 11:43 Music
drwxrwxrwx+ 1 nobody nobody 0 Jul 16 11:43 Pictures
drwxr-xr-x. 1 root root 36 Jul 28 11:36 .purge
drwxr-xr-x. 1 root root 68 Jul 28 11:36 ._share
drwxr-xr-x. 1 root root 0 Jul 16 11:04 .vault
drwxrwxrwx+ 1 nobody nobody 0 Jul 16 11:43 Videos
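With the volume mounted read-only, getting the data off is just a copy to whatever rescue disk you have attached; the destination path below is only a placeholder:
[root@localhost mnt]# rsync -aHAX /mnt/ /path/to/rescue/target/   # destination is a placeholder, point it at your own disk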
Mount point status (showing /dev/md127 only):
[root@localhost ~]# mount -l|grep md127
/dev/md127 on /mnt type btrfs (ro,relatime,seclabel,space_cache) [0e35a911:data]
A few dmesg/log messages about BTRFS and md:
Jul 30 04:03:57 localhost kernel: [ 719.673597] BTRFS: device label 0e35a911:data devid 1 transid 76 /dev/md127
Jul 30 04:03:57 localhost kernel: BTRFS: device label 0e35a911:data devid 1 transid 76 /dev/md127
Jul 30 04:03:57 localhost kernel: BTRFS info (device md127): disk space caching is enabled
Jul 30 04:03:57 localhost kernel: [ 719.674570] BTRFS info (device md127): disk space caching is enabled
[root@localhost Documents]# dmesg|grep md
[ 56.422296] md: bind<sda3>
[ 56.432502] md: bind<sdb3>
[ 58.602410] md: raid1 personality registered for level 1
[ 58.603526] md/raid1:md127: active with 2 out of 2 mirrors
[ 58.603650] created bitmap (8 pages) for device md127
[ 58.603862] md127: bitmap initialized from disk: read 1 pages, set 0 of 14831 bits
[ 58.629099] md127: detected capacity change from 0 to 995236642816
[ 58.629884] md: bind<sda1>
[ 58.632111] md: bind<sda2>
[ 58.640096] md: bind<sdb2>
[ 58.643333] md/raid1:md125: active with 2 out of 2 mirrors
[ 58.643368] md125: detected capacity change from 0 to 536281088
[ 58.653856] md127: unknown partition table
[ 58.669308] md125: unknown partition table
[ 58.678096] md: bind<sdb1>
[ 58.680007] md/raid1:md126: active with 2 out of 2 mirrors
[ 58.680053] md126: detected capacity change from 0 to 4290772992
[ 58.686240] md126: unknown partition table
[ 63.132849] systemd-journald[632]: Received request to flush runtime journal from PID 1
[ 73.919360] BTRFS: device label 0e35a911:data devid 1 transid 76 /dev/md127
[ 74.285276] Adding 523708k swap on /dev/md125. Priority:-1 extents:1 across:523708k FS
[ 719.673597] BTRFS: device label 0e35a911:data devid 1 transid 76 /dev/md127
[ 719.674570] BTRFS info (device md127): disk space caching is enabled
[ 719.714024] SELinux: initialized (dev md127, type btrfs), uses xattr
Status of /dev/mdx (x=127, 126 and 125) in detail:
[root@localhost ~]# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Wed Jul 16 11:03:50 2014
Raid Level : raid1
Array Size : 971910784 (926.89 GiB 995.24 GB)
Used Dev Size : 971910784 (926.89 GiB 995.24 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Jul 30 04:02:48 2014
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : 0e35a911:data-0
UUID : 1617bc13:e5576688:64018e5b:0ca61008
Events : 1759
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 3 1 active sync /dev/sda3
[root@localhost ~]# mdadm --detail /dev/md126
/dev/md126:
Version : 1.2
Creation Time : Wed Jul 16 11:03:50 2014
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Jul 28 11:46:35 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : 0e35a911:0
UUID : 7062aefd:dcbae9c7:1dd84684:e7744d65
Events : 23
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 1 1 active sync /dev/sda1
[root@localhost ~]# mdadm --detail /dev/md125
/dev/md125:
Version : 1.2
Creation Time : Wed Jul 16 11:03:50 2014
Raid Level : raid1
Array Size : 523712 (511.52 MiB 536.28 MB)
Used Dev Size : 523712 (511.52 MiB 536.28 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Jul 21 10:11:11 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : 0e35a911:1
UUID : 767a9300:bd522aac:366459ee:712f6896
Events : 19
Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 2 1 active sync /dev/sda2
After this exercise I'm quite confident in the BTRFS data recovery option using an (up-to-date) Linux box; the only test left to do would be with a degraded array.
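For that degraded test my rough plan (untested, so only a sketch) would be to stop the array, re-assemble it from a single member with --run, and mount read-only again:
[root@localhost ~]# umount /mnt
[root@localhost ~]# mdadm --stop /dev/md127
[root@localhost ~]# mdadm --assemble --run /dev/md127 /dev/sda3
[root@localhost ~]# mount -t btrfs -o ro /dev/md127 /mnt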