Remove inactive volumes to use the disk. Disk 1,2,3
Hello,
I'm having the "Remove Inactive Volumes" issue on disks 1, 2, 3. The only option offered is to format. Is there a way to fix this?
Re: Remove inactive volumes to use the disk. Disk 1,2,3
There are several possible causes here, so the best way to resolve it without a factory reset is to contact paid support.
If you have the Linux skills, you could work out whether the problem is in the RAID or in the file system via the Linux CLI.
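To separate the two layers, a few read-only checks go a long way. This sketch just prints a checklist, since the real commands have to be run as root on the NAS itself; the device name /dev/md127 follows the logs posted later in this thread and may differ on your unit:

```shell
# Non-destructive triage checklist, printed for reference. Run the listed
# commands as root on the NAS. /dev/md127 is the data array name from this
# thread's logs; yours may differ.
checklist='
cat /proc/mdstat                # RAID layer: a degraded raid5 shows [U_U]
mdadm --detail /dev/md127       # per-array state and member disks
dmesg | grep -i btrfs           # filesystem layer: mount/transid errors
'
printf '%s\n' "$checklist"
```

If mdstat shows all arrays clean but dmesg is full of BTRFS errors, the problem is the filesystem, not the RAID.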
Re: Remove inactive volumes to use the disk. Disk 1,2,3
I can take a look at the logs as well if you like.
You could download them from the NAS, upload them via a Google Drive link (or similar), and PM me the link.
Re: Remove inactive volumes to use the disk. Disk 1,2,3
It was out of the blue; no idea what happened. The only issue was that space was 91% full.
Re: Remove inactive volumes to use the disk. Disk 1,2,3
Hi @shafy
Thanks for sending over the logs.
The reason you are getting the "Remove Inactive Volumes" error is that your data volume does not mount.
The RAIDs are running fine, except it looks like you once had 3 disks in the NAS and now only have 2?
md127 : active raid5 sda3[0] sdb3[2]
      1943825664 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [U_U]   <<<=== missing disk here
A RAID 5 like yours will keep running with one disk gone from the array. However, using the NAS in this state is risky, as one more disk failure will take down the data RAID. That said, your two remaining disks are healthy and the RAID is running, so that is not the problem.
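For anyone reading along, the [3/2] [U_U] fields in the mdstat line are where the missing disk shows up: 3 members expected, 2 active, and the underscore marks the absent one. A quick parse of the line from the logs makes the degraded state explicit:

```shell
# Parse the [expected/active] member counts out of the mdstat line from
# this thread's logs. [3/2] = 3 members expected, 2 active; the "_" in
# [U_U] marks the missing member.
line='md127 : active raid5 sda3[0] sdb3[2] 1943825664 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [U_U]'
counts=$(printf '%s\n' "$line" | grep -o '\[[0-9]*/[0-9]*\]' | tr -d '[]')
expected=${counts%/*}
active=${counts#*/}
echo "expected=$expected active=$active"        # prints expected=3 active=2
[ "$active" -lt "$expected" ] && echo "array is degraded but running"
```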
Looking further, I can see that the filesystem is complaining about what looks like an inconsistency between the filesystem and the journal. Perhaps a write or delete call got stuck in the journal, but as the journal no longer matches the filesystem state, BTRFS complains and refuses to mount.
Mar 08 20:25:10 kernel: BTRFS: device label xxxxxx:data devid 1 transid 677460 /dev/md127
Mar 08 20:25:10 start_raids[1383]: parent transid verify failed on 616398848 wanted 677461 found 677460
Mar 08 20:25:10 start_raids[1383]: parent transid verify failed on 616398848 wanted 677461 found 677460
Mar 08 20:25:10 start_raids[1383]: parent transid verify failed on 616398848 wanted 677461 found 677460
Mar 08 20:25:16 kernel: BTRFS error (device md127): parent transid verify failed on 616398848 wanted 677462 found 677460
Mar 08 20:25:16 kernel: BTRFS error (device md127): parent transid verify failed on 616398848 wanted 677462 found 677460
Mar 08 20:25:16 kernel: BTRFS warning (device md127): failed to read log tree
Mar 08 20:25:16 kernel: BTRFS error (device md127): open_ctree failed
This can usually be fixed with a zero-log command, as that clears the filesystem journal. It can potentially be dangerous though, so if the data is important you should first attempt to mount the volume in recovery/read-only mode.
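A sketch of that order of operations, assuming /dev/md127 is the data volume (as in the logs) and a hypothetical mount point. On the older kernels these units run, the read-only recovery mount option is spelled "recovery"; newer kernels use "rescue=usebackuproot". The commands are printed rather than executed, since they must run as root on the NAS:

```shell
# Order of operations for the journal problem, printed for reference.
# /mnt/recovery is a hypothetical mount point; run these as root on the NAS.
steps='
# 1. Non-destructive first: try a read-only recovery mount
mkdir -p /mnt/recovery
mount -o ro,recovery /dev/md127 /mnt/recovery
# 2. Only once the data has been safely copied off, clear the journal
btrfs rescue zero-log /dev/md127
'
printf '%s\n' "$steps"
```

The point of the ordering is that step 1 cannot make anything worse, while step 2 discards the pending journal and so can lose the last writes.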
A little more concerning is this entry here.
Mar 08 20:25:16 start_raids[1383]: mount: /dev/md/data-0: can't read superblock
It tells us that there is a corrupt block somewhere in the filesystem. It might not be anything critical, but we don't know.
I reckon a BTRFS repair could fix this, but again, it is potentially dangerous to run.
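Worth noting: btrfs check is read-only by default, which makes it a safe way to see the damage first; only the --repair flag rewrites anything, and that should be a last resort. Printed for reference (run as root on the NAS, with the volume unmounted):

```shell
# Safe-first inspection of the filesystem, printed for reference.
# Run as root on the NAS with the data volume unmounted.
steps='
btrfs check --readonly /dev/md127   # safe: report problems only
# btrfs check --repair /dev/md127   # destructive: last resort, after backup
'
printf '%s\n' "$steps"
```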
So, the questions at this point are:
1. Is the data important?
- If no, then factory reset and start over.
2. Do you have a backup?
- If yes, then factory reset, start over, and restore from backup.
3. If the data is important and you don't have a backup then it is time to look at filesystem recovery options.
My opinion is that this is likely recoverable, but there are some precautions that need to be taken.
Also, as this is a Pro 4 unit running OS6, I am not sure how willing NETGEAR would be to help here, even under a paid data recovery contract. It might be worth asking anyway.
If you are handy with Linux and are comfortable running some commands, I can give some advice on safe commands to run to try and at least get the data back online so you can carry out a backup.
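Roughly, the "get it online and back it up" path would look like this. The mount point and the backup target are hypothetical examples (an attached USB disk, a share on another machine, whatever you have); the commands are printed for reference and must run as root on the NAS:

```shell
# Read-only recovery mount followed by a full copy-off, printed for
# reference. /mnt/recovery and /media/usb-backup are hypothetical paths;
# run as root on the NAS.
steps='
mount -o ro,recovery /dev/md127 /mnt/recovery
rsync -a --progress /mnt/recovery/ /media/usb-backup/
umount /mnt/recovery
'
printf '%s\n' "$steps"
```

With the volume mounted read-only, rsync can be re-run safely if it is interrupted partway through.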
By the way, the 91% warning is for the OS volume. It is filling up, and I suspect Plex is doing something here. However, that is the least of the concerns right now and can be fixed later.
[19/03/09 08:00:12 +04] crit:volume:LOGMSG_SYSTEM_USAGE_WARN System volume root's usage is 91%. This condition should not occur under normal conditions. Contact technical support.
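For later: finding what is filling a volume is usually just a du sorted by size. This runs on any Linux box; it is pointed at / here, which on the NAS is the OS volume (run as root there for complete results):

```shell
# Show the five largest top-level directories on the root filesystem.
# -x stays on one filesystem so mounted data volumes are not counted.
usage=$(du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 5)
printf '%s\n' "$usage"
```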
Cheers