Forum Discussion
Westyfield2
Apr 21, 2019 · Tutor
No Volume Exists - Remove inactive volumes in order to use the disk
Hi,
Running v6.9.3
After a reboot the NAS came up saying "No Volume Exists".
Have done another couple of shutdowns and startups and it's now saying "Remove inactive volumes to use the disk...".
Sandshark
Apr 25, 2019 · Sensei - Experienced User
What are the results of lsblk?
Westyfield2
Apr 25, 2019 · Tutor
admin@NAS:/$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  1.8T  0 disk
├─sda1        8:1    0    4G  0 part
│ └─md0       9:0    0    4G  0 raid1 /
├─sda2        8:2    0  512M  0 part
│ └─md1       9:1    0    2G  0 raid6 [SWAP]
└─sda3        8:3    0  1.8T  0 part
  └─md126     9:126  0  9.1T  0 raid5
sdb           8:16   0  5.5T  0 disk
├─sdb1        8:17   0    4G  0 part
│ └─md0       9:0    0    4G  0 raid1 /
├─sdb2        8:18   0  512M  0 part
│ └─md1       9:1    0    2G  0 raid6 [SWAP]
├─sdb3        8:19   0  1.8T  0 part
│ └─md126     9:126  0  9.1T  0 raid5
├─sdb4        8:20   0  1.8T  0 part
│ └─md125     9:125  0  3.7T  0 raid5
└─sdb5        8:21   0  1.8T  0 part
  └─md127     9:127  0  1.8T  0 raid1
sdc           8:32   0  5.5T  0 disk
├─sdc1        8:33   0    4G  0 part
│ └─md0       9:0    0    4G  0 raid1 /
├─sdc2        8:34   0  512M  0 part
│ └─md1       9:1    0    2G  0 raid6 [SWAP]
├─sdc3        8:35   0  1.8T  0 part
│ └─md126     9:126  0  9.1T  0 raid5
├─sdc4        8:36   0  1.8T  0 part
│ └─md125     9:125  0  3.7T  0 raid5
└─sdc5        8:37   0  1.8T  0 part
  └─md127     9:127  0  1.8T  0 raid1
sdd           8:48   0  1.8T  0 disk
├─sdd1        8:49   0    4G  0 part
│ └─md0       9:0    0    4G  0 raid1 /
├─sdd2        8:50   0  512M  0 part
│ └─md1       9:1    0    2G  0 raid6 [SWAP]
└─sdd3        8:51   0  1.8T  0 part
  └─md126     9:126  0  9.1T  0 raid5
sde           8:64   0  1.8T  0 disk
├─sde1        8:65   0    4G  0 part
│ └─md0       9:0    0    4G  0 raid1 /
├─sde2        8:66   0  512M  0 part
│ └─md1       9:1    0    2G  0 raid6 [SWAP]
└─sde3        8:67   0  1.8T  0 part
  └─md126     9:126  0  9.1T  0 raid5
sdf           8:80   0  3.7T  0 disk
├─sdf1        8:81   0    4G  0 part
│ └─md0       9:0    0    4G  0 raid1 /
├─sdf2        8:82   0  512M  0 part
│ └─md1       9:1    0    2G  0 raid6 [SWAP]
├─sdf3        8:83   0  1.8T  0 part
│ └─md126     9:126  0  9.1T  0 raid5
└─sdf4        8:84   0  1.8T  0 part
  └─md125     9:125  0  3.7T  0 raid5
- Sandshark · Apr 25, 2019 · Sensei - Experienced User
OK, I just recently went through this exercise. See How-to-recover-from-Remove-inactive-volumes-error.
I see your drives, in order of bays 1-6, are 2TB, 6TB, 6TB, 2TB, 2TB, and 4TB. These form three RAID layers: md127, md126, and md125. All the RAIDs seem intact, but the BTRFS file system didn't mount. If this differs from what you believe you have, then the rest of this is not going to work.
Try using cat /proc/mdstat to verify all the RAID layers are healthy. If any are re-syncing, let them finish. Next, mdadm --detail /dev/md127 (and md126 and md125) will show you, among other things, the array names, which should be an 8-digit hex host ID followed by a colon and data-0, data-1, and data-2, assuming a single, standard XRAID volume named "data". Also do a cat /etc/fstab to see if all your data volumes (which I assume is just the one) are listed. It should look something like this:
LABEL=43f6464e:data /data btrfs defaults 0 0
If all looks right, then mount --all should be all you need.
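To put those checks in one place, the happy-path sequence is roughly this (assuming a single XRAID volume named "data"; these are just the same commands described above, nothing new):

# all arrays should be clean and not re-syncing
cat /proc/mdstat
# each data layer should be named <8-digit hex host ID>:data-N
mdadm --detail /dev/md127
mdadm --detail /dev/md126
mdadm --detail /dev/md125
# the data volume should be listed here
cat /etc/fstab
# if everything above checks out, this should bring the volume up
mount --all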
If the RAID layers don't have the right names, then you are going to have to re-assemble them with the right ones. I've not gone that far. If fstab doesn't include your data volume, you're going to have to add it. That 8-digit hex code in my example is the host ID again.
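I haven't tested the rename myself, so treat this as a sketch only, but the general shape would be something like the following. The 43f6464e host ID and the data-1 name are purely illustrative (use the values mdadm --detail reports for your own system), and the sdb4/sdc4/sdf4 members are taken from your lsblk output:

# stop the mis-named layer, then re-assemble it under the correct <hostID>:data-N name
# (--update=name rewrites the array name stored in the v1.x superblock)
mdadm --stop /dev/md125
mdadm --assemble /dev/md125 --update=name --name=43f6464e:data-1 /dev/sdb4 /dev/sdc4 /dev/sdf4
# and if the data volume is missing from fstab, add a line like the example above
echo "LABEL=43f6464e:data /data btrfs defaults 0 0" >> /etc/fstab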
Let us know what works, or post anything that looks out of kilter from the commands I listed.
- Westyfield2 · Apr 26, 2019 · Tutor
Looks like md125 isn't happy.
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md125 : active raid5 sdc4[0] sdb4[2] sdf4[1]
      3906749824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md126 : active raid5 sda3[0] sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[6]
      9743313920 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md127 : active raid1 sdc5[0] sdb5[1]
      1953372928 blocks super 1.2 [2/2] [UU]

md1 : active raid6 sda2[0] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]

md0 : active raid1 sda1[0] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[6]
      4190208 blocks super 1.2 [6/6] [UUUUUU]

unused devices: <none>
mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Dec 16 23:36:47 2016
     Raid Level : raid1
     Array Size : 1953372928 (1862.88 GiB 2000.25 GB)
  Used Dev Size : 1953372928 (1862.88 GiB 2000.25 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Apr 15 20:35:57 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : 33eadf27:data-2  (local to host 33eadf27)
           UUID : a4343df5:9ab3de78:5fa43f83:8f0c56b5
         Events : 232

    Number   Major   Minor   RaidDevice State
       0       8       37        0      active sync   /dev/sdc5
       1       8       21        1      active sync   /dev/sdb5
mdadm --detail /dev/md126
/dev/md126:
        Version : 1.2
  Creation Time : Wed Oct 5 18:01:25 2016
     Raid Level : raid5
     Array Size : 9743313920 (9291.95 GiB 9977.15 GB)
  Used Dev Size : 1948662784 (1858.39 GiB 1995.43 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Fri Apr 26 16:59:04 2019
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 33eadf27:data-0  (local to host 33eadf27)
           UUID : 64c2c473:9f377754:501cbcc3:bf5a752e
         Events : 1232

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       6       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8       67        4      active sync   /dev/sde3
       5       8       83        5      active sync   /dev/sdf3
mdadm --detail /dev/md125
/dev/md125:
        Version : 1.2
  Creation Time : Wed Oct 5 18:02:22 2016
     Raid Level : raid5
     Array Size : 3906749824 (3725.77 GiB 4000.51 GB)
  Used Dev Size : 1953374912 (1862.88 GiB 2000.26 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Apr 26 16:59:04 2019
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 33eadf27:data-1  (local to host 33eadf27)
           UUID : 699ba84d:86b18c86:edf75682:8aa9843a
         Events : 5574

    Number   Major   Minor   RaidDevice State
       0       8       36        0      active sync   /dev/sdc4
       1       8       84        1      active sync   /dev/sdf4
       2       8       20        2      active sync   /dev/sdb4
cat /etc/fstab
LABEL=33eadf27:data /data btrfs defaults 0 0
mount --all
mount: /dev/md125: can't read superblock
- Sandshark · Apr 26, 2019 · Sensei - Experienced User
You could try stopping and re-assembling md125, but I fear it will do no good. Maybe try booting (read-only would be best to try first) with each set of 5 drives, leaving out one of the md125 members (the 4TB and the two 6TBs) in turn.
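If you do try the stop/re-assemble, it's roughly this (a sketch only; the sdb4, sdc4 and sdf4 members come from your mdadm --detail /dev/md125 output above, so double-check them, and I'd mount read-only first so nothing gets written while things are in doubt):

mdadm --stop /dev/md125
mdadm --assemble /dev/md125 /dev/sdb4 /dev/sdc4 /dev/sdf4
# check it came back clean before touching the file system
cat /proc/mdstat
# read-only mount first; only go back to 'mount --all' once this works
mount -o ro /data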