Forum Discussion
elutris
Nov 24, 2014 · Aspirant
Rebuilding partition after "corrupt root"? #24201840
Recently had a "Corrupt Root" take down my RN316 unit. "Reinstall OS" failed to work, as the root partition could not be mounted, so I proceeded to use the manufacturer's disk utility to check all 6 WD drives; all passed.
Hearty thanks to Stanman130's thread "OS data Recovery" (http://www.readynas.com/forum/viewtopic.php?f=50&t=75900). After building a Fedora system so that I had an alternate system to mount the drives, I validated that the /data partition (/dev/dm127) was in perfect order, and only the OS partition was corrupt.
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active (auto-read-only) raid5 sde3[2] sdg3[4] sdd3[0] sdb3[3] sdf3[1] sdc3[5]
19510833280 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md126 : active (auto-read-only) raid1 sdd1[0] sde1[2] sdf1[1]
4192192 blocks super 1.2 [6/3] [UU___U]
md127 : active (auto-read-only) raid1 sdd2[0] sdg2[4] sdb2[3] sde2[2] sdc2[5] sdf2[1]
523968 blocks super 1.2 [6/6] [UUUUUU]
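The `[6/3] [UU___U]` status on md126 is the tell-tale: only three of that mirror's six configured members are active. As a small sketch (not from the thread), a saved copy of `/proc/mdstat` can be scanned for degraded arrays like this:

```shell
#!/bin/sh
# Scan mdstat-format text on stdin and report any array whose active
# member count is below its configured size, e.g. "md126 degraded (3/6)".
check_mdstat() {
    awk '
        /^md/ { array = $1 }
        {
            for (i = 1; i <= NF; i++)
                if ($i ~ /^\[[0-9]+\/[0-9]+\]$/) {
                    split(substr($i, 2, length($i) - 2), c, "/")
                    if (c[2] + 0 < c[1] + 0)
                        printf "%s degraded (%s/%s)\n", array, c[2], c[1]
                }
        }
    '
}
```

Usage would be `check_mdstat < /proc/mdstat` on the rescue system.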
It seems like the easiest restoration step would be to initialize each of the /dev/sdb1, /dev/sdd1,...,/dev/sdg1 partitions (making up the /dev/md125 OS partition), thus enabling the ReadyNAS "OS Reinstall" to automatically rebuild the OS. Yes, 3 disks are missing for a rebuild, but if I can zero them out and reinstall the OS I can work on replacing/rebuilding RAID volumes once data is back online for users.
I am seeking any guidance (as I'm an advanced user, but certainly not a UNIX sysadmin level) on how this can be done, without risking the /data partitions by doing something stupid. Currently the volumes are only mounted read-only on Fedora.
Any help is welcome. Backups are available, but at 10TB it's quicker to restore the OS partition and minimize further downtime if at all possible.
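For what it's worth, the usual way to wipe md members so a fresh install can re-create the array is `mdadm --zero-superblock` on each partition, after stopping the array. A dry-run sketch, assuming the OS mirror shows up as md126 in Fedora (as the replies below establish) and that sdb1..sdg1 are its members; this is destructive to those partitions once DRY_RUN is cleared, so by default it only prints the commands:

```shell
#!/bin/sh
# Sketch of wiping the OS-array superblocks so "OS Reinstall" can
# rebuild the mirror from scratch. DESTRUCTIVE once run for real:
# by default (DRY_RUN=echo) every command is only printed. Verify
# with `mdadm --detail /dev/md126` that these really are the OS
# members before setting DRY_RUN= (empty) to execute.
DRY_RUN=${DRY_RUN:-echo}

wipe_os_members() {
    $DRY_RUN mdadm --stop /dev/md126            # stop the degraded OS mirror
    for d in b c d e f g; do                    # first partition on each disk
        $DRY_RUN mdadm --zero-superblock "/dev/sd${d}1"
    done
}
```

The data array is never touched: only the small first partitions that make up the OS mirror are listed.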
Thanks,
Chris
2 Replies
- mdgm-ntgr (NETGEAR Employee, Retired): Can you post the output of this:
# mount /dev/md127 /mnt
# dmesg | tail
It appears in your Fedora system the OS partition is showing up as md127 and your data volume as md125, whereas in the NAS the OS partition would be md0 and the data volume md127.
- elutris (Aspirant): You are correct (doh!). This is what happens when I get back to something from Friday's efforts.
md125 (raid 5) is the 18.2TB data volume
md126 is the OS partition
md127 is the swap
[root@OALinux cwmiller]# mount /dev/md127 /mnt
mount: unknown filesystem type 'swap'
[root@OALinux cwmiller]# dmesg | tail
[ 307.797479] RPC: Registered tcp transport module.
[ 307.797480] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 307.822505] FS-Cache: Netfs 'nfs' registered for caching
[ 591.935707] Key type dns_resolver registered
[ 591.955690] FS-Cache: Netfs 'cifs' registered for caching
[ 591.955823] Key type cifs.spnego registered
[ 591.955831] Key type cifs.idmap registered
[ 592.155528] SELinux: initialized (dev cifs, type cifs), uses genfs_contexts
[ 781.960104] BTRFS info (device md125): disk space caching is enabled
[ 786.455290] SELinux: initialized (dev md125, type btrfs), uses xattr
[root@OALinux cwmiller]# ls
[root@OALinux cwmiller]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md125 : active (auto-read-only) raid5 sdf3[1] sdd3[0] sde3[2] sdc3[5] sdb3[3] sdg3[4]
19510833280 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
md126 : active (auto-read-only) raid1 sdf1[1] sde1[2] sdd1[0]
4192192 blocks super 1.2 [6/3] [UU___U]
md127 : active (auto-read-only) raid1 sdf2[1] sde2[2] sdd2[0] sdb2[3] sdc2[5] sdg2[4]
523968 blocks super 1.2 [6/6] [UUUUUU]
unused devices: <none>
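The block counts in mdstat are actually enough to tell the three arrays apart without mounting anything: roughly 4 GB is the ReadyNAS OS root mirror, roughly 512 MB is swap, and anything large is the data volume. A sketch with my own size thresholds (not from the thread):

```shell
#!/bin/sh
# Label md arrays by their size in 1 KiB blocks, read from
# /proc/mdstat-format text on stdin. Thresholds are assumptions:
# under 1 GiB -> swap, under 8 GiB -> OS root, larger -> data volume.
label_arrays() {
    awk '
        /^md/ { array = $1 }
        / blocks / {
            kib = $1 + 0
            if (kib < 1024 * 1024)           label = "swap"
            else if (kib < 8 * 1024 * 1024)  label = "OS root"
            else                             label = "data volume"
            printf "%s: %s\n", array, label
        }
    '
}
```

Run against the output above, this labels md125 as the data volume, md126 as the OS root, and md127 as swap, matching the correction in the reply.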