Re: ReadyNAS NV+ v2 - Can't access shares, failed f/w update etc.
@snoofy wrote:
I changed disk 2 back to the old disk and reran the mdadm command; I get:
root@readyNAS:~# mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
mdadm: cannot open device /dev/sda3: Device or resource busy
mdadm: /dev/sda3 has no superblock - assembly aborted
Try running
cat /proc/mdstat
and see if /dev/md2 is listed.
If it isn't, then try
root@readyNAS:~# mdadm --assemble --force /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3
and if that appears to work, run the cat command again.
If md2 is listed, then try the vgscan step again.
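The decision above (check /proc/mdstat, then either retry vgscan or force-assemble) can be sketched as a small POSIX-shell helper. This is a sketch only; the function name is made up and not from the thread, and on the NAS you would feed it the real file with classify_md2 < /proc/mdstat:

```shell
# Hypothetical helper (not from the thread): classify md2's state from
# /proc/mdstat text supplied on stdin and say what to try next.
classify_md2() {
  mdstat=$(cat)                       # read the whole mdstat text once
  case $mdstat in
    *"md2 : active"*)   echo "active: retry vgscan" ;;
    *"md2 : inactive"*) echo "inactive: stop md2, then assemble with --force" ;;
    *)                  echo "missing: assemble with --force" ;;
  esac
}
```

The substring match on "md2 : active" / "md2 : inactive" relies on the fixed layout of /proc/mdstat lines such as "md2 : inactive sda3[1](S) ...".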
Re: ReadyNAS NV+ v2 - Can't access shares, failed f/w update etc.
OK, so:
root@readyNAS:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : inactive sda3[1](S) sdc3[4](S) sdd3[2](S) sdb3[0](S)
11702179554 blocks super 1.2
md1 : active raid1 sdb2[5] sda2[1] sdc2[4] sdd2[2]
524276 blocks super 1.2 [4/4] [UUUU]
md0 : active raid1 sdb1[4] sda1[1] sdd1[2] sdc1[3]
4193268 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
but vgscan won't give me anything but "No volume groups found".
What's interesting, though: if I run vgscan with a couple of "-v"s, I see md2 mentioned only once in there (compared with md0 and md1):
root@readyNAS:~# vgscan -vv
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Locking /var/lock/lvm/P_global WB
Wiping cache of LVM-capable devices
/dev/core: stat failed: No such file or directory
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
/dev/loop0: size is 0 sectors
/dev/sda: size is 5860533168 sectors
/dev/md0: size is 8386536 sectors
/dev/md0: size is 8386536 sectors
/dev/md0: No label detected
/dev/loop1: size is 0 sectors
/dev/sda1: size is 8388608 sectors
/dev/sda1: size is 8388608 sectors
/dev/md1: size is 1048552 sectors
/dev/md1: size is 1048552 sectors
/dev/md1: No label detected
/dev/loop2: size is 0 sectors
/dev/sda2: size is 1048576 sectors
/dev/sda2: size is 1048576 sectors
/dev/md2: size is 0 sectors
/dev/loop3: size is 0 sectors
/dev/sda3: size is 5851091825 sectors
/dev/sda3: size is 5851091825 sectors
/dev/loop4: size is 0 sectors
/dev/loop5: size is 0 sectors
/dev/loop6: size is 0 sectors
/dev/loop7: size is 0 sectors
/dev/sdb: size is 5860533168 sectors
/dev/sdb1: size is 8388608 sectors
/dev/sdb1: size is 8388608 sectors
/dev/sdb2: size is 1048576 sectors
/dev/sdb2: size is 1048576 sectors
/dev/sdb3: size is 5851091825 sectors
/dev/sdb3: size is 5851091825 sectors
/dev/sdc: size is 5860533168 sectors
/dev/sdc1: size is 8388608 sectors
/dev/sdc1: size is 8388608 sectors
/dev/sdc2: size is 1048576 sectors
/dev/sdc2: size is 1048576 sectors
/dev/sdc3: size is 5851091825 sectors
/dev/sdc3: size is 5851091825 sectors
/dev/sdd: size is 5860533168 sectors
/dev/sdd1: size is 8388608 sectors
/dev/sdd1: size is 8388608 sectors
/dev/sdd2: size is 1048576 sectors
/dev/sdd2: size is 1048576 sectors
/dev/sdd3: size is 5851091825 sectors
/dev/sdd3: size is 5851091825 sectors
No volume groups found
Unlocking /var/lock/lvm/P_global
For example, md0 is listed three times:
/dev/md0: size is 8386536 sectors
/dev/md0: size is 8386536 sectors
/dev/md0: No label detected
while md2 is only listed once:
/dev/md2: size is 0 sectors
and
root@readyNAS:~# lvmdiskscan
/dev/md0 [ 4,00 GiB]
/dev/md1 [ 511,99 MiB]
0 disks
2 partitions
0 LVM physical volume whole disks
0 LVM physical volumes
and
root@readyNAS:~# vgscan --mknodes -vv
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Locking /var/lock/lvm/P_global WB
Wiping cache of LVM-capable devices
/dev/core: stat failed: No such file or directory
Wiping internal VG cache
Reading all physical volumes. This may take a while...
Finding all volume groups
/dev/loop0: size is 0 sectors
/dev/sda: size is 5860533168 sectors
/dev/md0: size is 8386536 sectors
/dev/md0: size is 8386536 sectors
/dev/md0: No label detected
/dev/loop1: size is 0 sectors
/dev/sda1: size is 8388608 sectors
/dev/sda1: size is 8388608 sectors
/dev/md1: size is 1048552 sectors
/dev/md1: size is 1048552 sectors
/dev/md1: No label detected
/dev/loop2: size is 0 sectors
/dev/sda2: size is 1048576 sectors
/dev/sda2: size is 1048576 sectors
/dev/md2: size is 0 sectors
/dev/loop3: size is 0 sectors
/dev/sda3: size is 5851091825 sectors
/dev/sda3: size is 5851091825 sectors
/dev/loop4: size is 0 sectors
/dev/loop5: size is 0 sectors
/dev/loop6: size is 0 sectors
/dev/loop7: size is 0 sectors
/dev/sdb: size is 5860533168 sectors
/dev/sdb1: size is 8388608 sectors
/dev/sdb1: size is 8388608 sectors
/dev/sdb2: size is 1048576 sectors
/dev/sdb2: size is 1048576 sectors
/dev/sdb3: size is 5851091825 sectors
/dev/sdb3: size is 5851091825 sectors
/dev/sdc: size is 5860533168 sectors
/dev/sdc1: size is 8388608 sectors
/dev/sdc1: size is 8388608 sectors
/dev/sdc2: size is 1048576 sectors
/dev/sdc2: size is 1048576 sectors
/dev/sdc3: size is 5851091825 sectors
/dev/sdc3: size is 5851091825 sectors
/dev/sdd: size is 5860533168 sectors
/dev/sdd1: size is 8388608 sectors
/dev/sdd1: size is 8388608 sectors
/dev/sdd2: size is 1048576 sectors
/dev/sdd2: size is 1048576 sectors
/dev/sdd3: size is 5851091825 sectors
/dev/sdd3: size is 5851091825 sectors
No volume groups found
Finding all logical volumes
No volume groups found
Unlocking /var/lock/lvm/P_global
🤔🙄
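One way to read the trace above: an inactive md array whose members are all flagged (S) exposes a block device that reports 0 sectors, and LVM appears to skip zero-size devices before it ever probes for a label, which would explain why md0 and md1 get a "No label detected" line but md2 does not. The skip rule here is an assumption about LVM's behaviour, not taken from its source; the filter can be mimicked over the "size is" lines from the output above:

```shell
# Keep only devices LVM would plausibly go on to probe (size > 0 sectors).
# Input: "size is" lines as printed by vgscan -vv, e.g.
#   /dev/md2: size is 0 sectors
scannable() {
  awk -F'[: ]+' '/ size is / && $4 > 0 { print $1 }'
}
```

Fed the vgscan -vv output above, this drops /dev/md2 and the loop devices, matching the devices that lvmdiskscan actually reports.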
Re: ReadyNAS NV+ v2 - Can't access shares, failed f/w update etc.
Sounds like something has corrupted the LVM structures. I guess you could try recovery software - perhaps R-Studio.
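Before reaching for recovery software, it may be worth one more attempt at the array itself: stopping the inactive md2 should release the busy /dev/sda3, after which a forced assembly can be retried. A hedged dry-run sketch (the steps are echoed rather than executed, since --stop and --force are risky; whether sda3 frees up after the stop is an assumption; device names are taken from the posts above):

```shell
# Hypothetical dry run: print the commands one might try, in order.
# Remove the echo prefixes to actually run them - and only after
# mdadm --examine on the member partitions looks sane.
recovery_steps() {
  echo "mdadm --stop /dev/md2"
  echo "mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3"
  echo "cat /proc/mdstat"
  echo "vgscan --mknodes"
}
```

If the forced assembly brings md2 up degraded or clean, vgscan should then see a non-zero device to probe; if it fails again, file-level recovery tools are the remaining option.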