Forum Discussion
console1
Nov 06, 2012 · Aspirant
NVX: paths for the shares listed below could [Case #19821175]
I upgraded one of the drives, every thing was fine, expanded fine and I was accessing all files fine till today when: Mon Nov 5 20:33:55 PST 2012 System is up. Mon Nov 5 20:33:55 PST 2012 The pa...
console1
Nov 20, 2012 · Aspirant
I guess I am on my own.
What causes all the superblock copies to be corrupt?
"According to our Level 3 Expert, all backup superblocks of your ReadyNAS had been destroyed. He tried all possible steps to recover the shares but he was unsuccessful. Data cannot be recovered anymore. I sincerely apologize for this inconvenience."
So I logged in to see whether I could attempt some recovery myself after the Level 3 engineer's unsuccessful try. After searching the forum inside and out:
The md* volumes all report a clean state, but:
e2fsck: unable to set superblock flags on /dev/c/c
Any suggestions on what to try next?
Here is the output of some of the commands:
# mdadm -E -s
ARRAY /dev/md/3 level=raid5 metadata=1.2 num-devices=3 UUID=1c397b13:3dcbf756:97044c5e:f32f1fad name=00223FAA11D8:3
ARRAY /dev/md/2 level=raid5 metadata=1.2 num-devices=4 UUID=eee7d234:6e0d5103:2ec686b0:a10a2e74 name=00223FAA11D8:2
ARRAY /dev/md/1 level=raid6 metadata=1.2 num-devices=4 UUID=62bc3ab8:b3b480c7:3fe0c84b:ed4e0bcd name=00223FAA11D8:1
ARRAY /dev/md/0 level=raid1 metadata=1.2 num-devices=4 UUID=7fd03cc1:2db7e12d:6803bd5d:8b7e5748 name=00223FAA11D8:0
# echo DEVICE partitions > /etc/mdadm.conf
# ls /etc/mdadm.conf
/etc/mdadm.conf
# mdadm -E -s >> /etc/mdadm.conf
# cat /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md/3 level=raid5 metadata=1.2 num-devices=3 UUID=1c397b13:3dcbf756:97044c5e:f32f1fad name=00223FAA11D8:3
ARRAY /dev/md/2 level=raid5 metadata=1.2 num-devices=4 UUID=eee7d234:6e0d5103:2ec686b0:a10a2e74 name=00223FAA11D8:2
ARRAY /dev/md/1 level=raid6 metadata=1.2 num-devices=4 UUID=62bc3ab8:b3b480c7:3fe0c84b:ed4e0bcd name=00223FAA11D8:1
ARRAY /dev/md/0 level=raid1 metadata=1.2 num-devices=4 UUID=7fd03cc1:2db7e12d:6803bd5d:8b7e5748 name=00223FAA11D8:0
# mdadm --assemble --scan
mdadm: /dev/md/3 has been started with 3 drives.
mdadm: /dev/md/2 has been started with 4 drives.
mdadm: /dev/md/1 has been started with 4 drives.
mdadm: /dev/md/0 has been started with 4 drives.
# mdadm -Q --detail /dev/md0
/dev/md0:
Version : 1.02
Creation Time : Sat Feb 20 15:11:12 2010
Raid Level : raid1
Array Size : 4194292 (4.00 GiB 4.29 GB)
Used Dev Size : 4194292 (4.00 GiB 4.29 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Nov 12 05:52:59 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Name : 00223FAA11D8:0
UUID : 7fd03cc1:2db7e12d:6803bd5d:8b7e5748
Events : 1024
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
6 8 17 1 active sync /dev/sdb1
5 8 33 2 active sync /dev/sdc1
4 8 49 3 active sync /dev/sdd1
# mdadm -Q --detail /dev/md1
/dev/md1:
Version : 1.02
Creation Time : Sun Aug 29 06:20:51 2010
Raid Level : raid6
Array Size : 1048448 (1024.05 MiB 1073.61 MB)
Used Dev Size : 524224 (512.02 MiB 536.81 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Mon Nov 12 05:49:29 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
Name : 00223FAA11D8:1
UUID : 62bc3ab8:b3b480c7:3fe0c84b:ed4e0bcd
Events : 76
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
5 8 18 1 active sync /dev/sdb2
4 8 34 2 active sync /dev/sdc2
3 8 50 3 active sync /dev/sdd2
# mdadm -Q --detail /dev/md2
/dev/md2:
Version : 1.02
Creation Time : Sat Feb 20 15:11:12 2010
Raid Level : raid5
Array Size : 2916123888 (2781.03 GiB 2986.11 GB)
Used Dev Size : 972041296 (927.01 GiB 995.37 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Mon Nov 12 05:49:33 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 16K
Name : 00223FAA11D8:2
UUID : eee7d234:6e0d5103:2ec686b0:a10a2e74
Events : 23129
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
6 8 21 1 active sync /dev/sdb5
5 8 37 2 active sync /dev/sdc5
4 8 53 3 active sync /dev/sdd5
# mdadm -Q --detail /dev/md3
/dev/md3:
Version : 1.02
Creation Time : Tue Aug 31 01:45:22 2010
Raid Level : raid5
Array Size : 1953501568 (1863.00 GiB 2000.39 GB)
Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Mon Nov 12 05:49:25 2012
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Name : 00223FAA11D8:3
UUID : 1c397b13:3dcbf756:97044c5e:f32f1fad
Events : 4096
Number Major Minor RaidDevice State
2 8 38 0 active sync /dev/sdc6
1 8 54 1 active sync /dev/sdd6
3 8 22 2 active sync /dev/sdb6
# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdd1[4] sdc1[5] sdb1[6]
4194292 blocks super 1.2 [4/4] [UUUU]
md1 : active raid6 sda2[0] sdd2[3] sdc2[4] sdb2[5]
1048448 blocks super 1.2 level 6, 64k chunk, algorithm 2 [4/4] [UUUU]
md2 : active raid5 sda5[0] sdd5[4] sdc5[5] sdb5[6]
2916123888 blocks super 1.2 level 5, 16k chunk, algorithm 2 [4/4] [UUUU]
md3 : active raid5 sdc6[2] sdb6[3] sdd6[1]
1953501568 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
# pvscan
PV /dev/md2 VG c lvm2 [2.72 TB / 0 free]
PV /dev/md3 VG c lvm2 [1.82 TB / 10.00 GB free]
Total: 2 [2.54 TB] / in use: 2 [2.54 TB] / in no VG: 0 [0 ]
# vgscan
Reading all physical volumes. This may take a while...
Found volume group "c" using metadata type lvm2
# vgchange -ay c
1 logical volume(s) in volume group "c" now active
# e2fsck /dev/c/c
e2fsck 1.41.6 (30-May-2009)
e2fsck: Group descriptors look bad... trying backup blocks...
/dev/c/c: recovering journal
e2fsck: unable to set superblock flags on /dev/c/c
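When e2fsck bails out at this point, a commonly suggested next step (a general e2fsprogs technique, not something support tried in this thread) is to list where the backup superblocks live with a dry-run `mke2fs -n`, then point e2fsck at one of them with `-b`. The sketch below demonstrates the mechanics on a throwaway file-backed image rather than the real `/dev/c/c` volume; the file path and image size are arbitrary:

```shell
# Build a small throwaway ext3 image so nothing real is touched
dd if=/dev/zero of=/tmp/sb-demo.img bs=1M count=16 2>/dev/null
mkfs.ext3 -F -q /tmp/sb-demo.img

# Dry run (-n): mke2fs reports where the backup superblocks would be
# placed without writing anything. On a real volume you must pass the
# same block size the filesystem was created with (via -b) if it was
# non-default, or the reported locations will be wrong.
mke2fs -n -F /tmp/sb-demo.img | grep -A1 "Superblock backups"

# Grab the first backup location and fsck against it instead of the
# primary superblock (the -b flag); -fy forces a full non-interactive run
sb=$(mke2fs -n -F /tmp/sb-demo.img | grep -A1 "Superblock backups" \
      | tail -n1 | awk '{print $1}' | tr -d ',')
e2fsck -fy -b "$sb" /tmp/sb-demo.img
```

On the actual volume this would be `e2fsck -fy -b <backup-block> /dev/c/c`. Since support reported that all backup superblocks were destroyed, this may still fail; in that case, imaging the member disks (e.g. with `dd`) before any further write attempts is the usual precaution.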