
Forum Discussion

nope_mx5
Aspirant
Mar 17, 2013

[HOWTO FIX] no volume, missing superblock, missing device

Had the same issue as majello in this thread:
http://www.readynas.com/forum/viewtopic.php?f=64&t=68130#p378798

I had already replaced drive 1 & 2 with no problems, everything was resynced and working fine.

When I replaced the 3rd drive, something did not complete correctly, and I ended up removing the drive, shutting the system down, and
powering up the nas with only 5 drives (which should be no problem).

I'm pretty sure I was a bit too hasty here.
I should probably have waited a few hours just to make sure it wasn't still working in the background during boot.

After removing the drive and booting up, I got the same issue: no volumes found, and an error saying the device /dev/c/c might not exist.

I called NETGEAR, got a case#, and they wanted me to reset the password to default, enable ssh,
and set up port-forwarding so they could connect to the nas remotely. As I work as a Linux admin and am also concerned about network security,
I was not totally comfortable opening my network to everyone, so I said they could connect via TeamViewer and call me when they were ready to start
troubleshooting. That way I could "control" who got access to my network.

They made a note of this in the case, and I was told to expect someone to contact me within 24 hours.
After a couple of hours, I started thinking that the data on the system is not very important; I mostly use it for iSCSI targets
for my ESXi lab environment, so it would be OK if I had to rebuild from scratch.
That's when I decided to do the same as majello, and start investigating the issue.

Pretty much everything worked out as it did for majello, at least until I tried to reassemble the array for md3.

Here's some of the output I got:

readynas:~# cat /proc/mdstat

Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda5[6] sdf5[5] sde5[4] sdd5[3] sdc5[8] sdb5[7]
4860200960 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md1 : active raid6 sda2[6] sdf2[5] sde2[4] sdd2[3] sdc2[8] sdb2[7]
2096896 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md0 : active raid1 sda1[6] sdf1[5] sde1[4] sdd1[3] sdc1[8] sdb1[7]
4193268 blocks super 1.2 [6/6] [UUUUUU]

unused devices: <none>
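For anyone walking through the same diagnosis on their own box, the array names and RAID levels can be pulled out of mdstat-style output with a quick grep/awk. A sketch run against a sample of the output above (on a live system you'd read /proc/mdstat instead of the sample variable):

```shell
# Sketch: list md arrays and their RAID levels from mdstat-style output.
# Uses a here-doc sample of the output above instead of the real
# /proc/mdstat, purely for illustration.
mdstat_sample=$(cat <<'EOF'
md2 : active raid5 sda5[6] sdf5[5] sde5[4] sdd5[3] sdc5[8] sdb5[7]
      4860200960 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md1 : active raid6 sda2[6] sdf2[5] sde2[4] sdd2[3] sdc2[8] sdb2[7]
      2096896 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
EOF
)
# Field 1 is the array name, field 4 the personality (raid level).
echo "$mdstat_sample" | grep -E '^md[0-9]+ :' | awk '{print $1, $4}'
```

Seeing which mdX arrays are listed here (and which are not) is what makes the missing md3 obvious later on.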

lvscan
missing device uuid ...


cat /etc/lvm/backup/c

This pointed me to the device that had the missing uuid: md3.
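The backup file maps each physical volume's UUID to a device path, which is how the "missing device uuid" from lvscan can be traced back to md3. A sketch on a fabricated fragment of the file (the UUID and layout here are invented for illustration; on a real box, grep /etc/lvm/backup/c for the UUID lvscan printed):

```shell
# Sketch: map a PV UUID from lvscan's error back to its device path.
# The backup fragment and UUID below are fabricated for illustration;
# real entries in /etc/lvm/backup/<vg> have the same id/device shape.
lvm_backup_sample=$(cat <<'EOF'
    pv2 {
        id = "AbCdEf-1234-5678-9abc-def0-1111-222222"
        device = "/dev/md3"
    }
EOF
)
missing_uuid="AbCdEf-1234-5678-9abc-def0-1111-222222"
# Print the device line that follows the matching id line.
echo "$lvm_backup_sample" | grep -A1 "$missing_uuid" | grep device
```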

fdisk -l

This showed that sda, sdb and sdc each had 1 extra partition compared to the last 3 drives.
Going by mdstat, md3 should contain sda6, sdb6 and sdc6, as these partitions were not part of md0, md1 or md2.

mdstat still showed recovery running on md2 (due to the new drive in slot 3), so I waited for that to complete.

Even after the re-sync, lvscan showed no volumes; md3 was still MIA.

I tried mdadm --assemble /dev/md3 /dev/sdc6 /dev/sdb6 /dev/sda6, but this failed, so I had to investigate a bit further.

mdadm -E /dev/sda6
mdadm -E /dev/sdb6

These showed a valid superblock on both partitions.

mdadm -E /dev/sdc6

Here the error showed itself: no superblock found
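The examine step can be scripted so all candidate members are checked in one pass: mdadm -E exits non-zero when it finds no superblock on a device. A self-contained sketch (with a stub mdadm function standing in for the real tool so it can run anywhere; sdc6 is faked as the broken member, matching this thread — drop the stub on a real system):

```shell
# Sketch: flag which candidate members lack a valid md superblock.
# mdadm -E exits non-zero when no superblock is detected, so the loop
# can classify each partition by exit status.
mdadm() {
    # Stub for illustration only: pretend /dev/sdc6 has no superblock.
    # Remove this function to run against the real mdadm as root.
    case "$2" in /dev/sdc6) return 1 ;; *) return 0 ;; esac
}
for p in /dev/sda6 /dev/sdb6 /dev/sdc6; do
    if mdadm -E "$p" >/dev/null 2>&1; then
        echo "$p: superblock OK"
    else
        echo "$p: no superblock"
    fi
done
```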

OK, let's recreate the array in degraded state:
mdadm --create /dev/md3 --verbose --level=5 --raid-devices=3 /dev/sda6 /dev/sdb6 missing

mdadm warns that the partitions appear to be part of an existing array; and because the array is re-created with the same parameters and device order, the existing data blocks are never rewritten, so the data in the array stays intact.

Poof, md3 started with 2 devices.
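A word of caution for anyone repeating this: --create will happily destroy data if the level, device order or raid-device count don't match the original array, so it's worth building and eyeballing the command before running it. A sketch that only prints the command for review (device names as in this thread; verify the order against mdadm -E output first):

```shell
# Sketch: assemble the degraded re-create command as a string and
# print it for review before actually running it. Device names match
# this thread; adjust for your own box.
build_recreate_cmd() {
    local md_dev="$1"; shift
    # "missing" stands in for the member whose superblock is gone.
    echo "mdadm --create $md_dev --verbose --level=5 --raid-devices=3 $*"
}
cmd=$(build_recreate_cmd /dev/md3 /dev/sda6 /dev/sdb6 missing)
echo "$cmd"   # review carefully, then run for real with: eval "$cmd"
```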

lvscan now showed the c volume as inactive, so I activated it:
lvchange -a y c

This activated the volume without errors.

cat /proc/mdstat

This showed all 3 arrays, md3 in degraded, but active state.

mount /dev/c/c /c/

This mounted everything as it should, and I got access to the shares from my computer. Just to be safe, I copied out some data I wanted to keep.

At this point, Frontview did not show the volume information at all.
It showed 0 free on all drives and no volumes, but it no longer said "no volumes found".
It seemed pretty confused about what was happening :)

After I had copied the data out, I figured I might as well try to add sdc6 back and see if things would rebuild.

mdadm --manage /dev/md3 --add /dev/sdc6

This added the final partition to the array, and cat /proc/mdstat showed md3 was rebuilding.
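While it rebuilds, the progress shows up in /proc/mdstat as a recovery line; a sketch that extracts just the percentage (the recovery line below is a fabricated sample for illustration — on a live system, grep /proc/mdstat itself):

```shell
# Sketch: pull the rebuild progress figure out of mdstat-style output.
# The md3 recovery sample below is fabricated for illustration.
mdstat_rebuild_sample=$(cat <<'EOF'
md3 : active raid5 sdc6[3] sdb6[1] sda6[0]
      976773120 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [=====>...............]  recovery = 27.3% (266338304/976773120) finish=94.8min
EOF
)
# On a live system: grep -o 'recovery = [0-9.]*%' /proc/mdstat
echo "$mdstat_rebuild_sample" | grep -o 'recovery = [0-9.]*%'
```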

When the rebuild was done, Frontview suddenly showed the volume correctly, iscsi worked, and everything was back to normal.

Big thanks to majello for pointing us in the correct direction :)

Still waiting for Netgear support to contact me, but at least I can tell them I fixed it myself when they do contact me :)

-n
