
Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Retired_Member
Not applicable

Please remove inactive volumes in order to use the disk. Disk #1,2,3,4

This morning my RN10400 stopped responding.

After a reboot I get the message: "Please remove inactive volumes in order to use the disk. Disk #1,2,3,4."

I searched this community first; the only thing I found is that the logs have to be sent to Netgear,

with no possible actions or solutions, so I don't know where to look.

I have 4 x 3TB Western Digital Red label disks in X-Raid (RAID5).

When I look in the log in the web interface, the last time the data balance completed was on December 10th.

(It is scheduled weekly.) Please help me. Also, can someone tell me how to prevent this in the future?
Thanks in advance,

 

Pascal

Model: RN10400|ReadyNAS 100 Series 4- Bay (Diskless)
Message 1 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4

Before I did a reboot, the display of the NAS said: "assert_qgroups_uptodate+58"

Message 2 of 27
FramerV
NETGEAR Employee Retired

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4

Hi Pascal76,

 

We will need the system logs before we can advise on any steps. Please follow the steps in the link below:

 

How do I send all logs to ReadyNAS Community moderators?

 


Regards,

Message 3 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Thank you for your reply.

I sent the logs.

Greetings and merry Christmas,

 

Pascal

Message 4 of 27
FramerV
NETGEAR Employee Retired

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Hi Pascal76,

 

I will be requesting that the logs be looked at. Happy holidays.

 

 

Regards,

Message 5 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Thank you.

Regards,

 

Pascal

Message 6 of 27
mdgm-ntgr
NETGEAR Employee Retired

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Do you have a backup?

 

Have you tried booting into volume read-only mode?

Message 7 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

I have a backup, but it's an old one. If I use it, I will lose the pictures I have taken over the past 6 months.

If I had a recent backup, I wouldn't ask you to spend your time on my problem.

I had contact with one of your engineers; (s)he told me (s)he is unable to access the volume.

(S)he told me to do a data recovery, but since then it has been quiet for about 24 hours.

I do hope my data can be recovered.

Message 8 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

"When I had a recent backup, I wouldn't ask to spend your time at my problem."

I want to say:"I wouldn't dare to ask".

Message 9 of 27
mdgm-ntgr
NETGEAR Employee Retired

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Data recovery attempts may inherently be unsuccessful, but we do make our best effort. Support can explain what costs are involved with a data recovery attempt.


Do you have a case number?

Message 10 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

I haven't received a case number yet.

I am working on the problem myself, too.

I have a btrfs fault. The following command says:

#btrfs restore -F -i -D -v /dev/md127 /dev/null


checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
checksum verify failed on 17616722329600 found 4BD9E359 wanted F5EBE960
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
bytenr mismatch, want=17616722329600, have=5711772815059134904
Couldn't read tree root
Could not open root, trying backup super
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
checksum verify failed on 17616722329600 found 4BD9E359 wanted F5EBE960
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
bytenr mismatch, want=17616722329600, have=5711772815059134904
Couldn't read tree root
Could not open root, trying backup super
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
checksum verify failed on 17616722329600 found 4BD9E359 wanted F5EBE960
checksum verify failed on 17616722329600 found 59A10048 wanted 83E42342
bytenr mismatch, want=17616722329600, have=5711772815059134904
Couldn't read tree root
Could not open root, trying backup super

Once I know how to find a usable tree root, I can restore the data myself.

A web search brought me to a page which says I have to use btrfs-find-root, but that command isn't available here.

What can I do next?

I am familiar with Linux; I have been working with it since 2008.
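
For reference, once a candidate tree root is located, btrfs restore can also be pointed at it by byte number with the -t option. A minimal sketch with placeholder values, assuming the data array is /dev/md127 (-D keeps it a dry run):

# <bytenr> is a tree root location reported by btrfs-find-root
btrfs restore -t <bytenr> -i -v -D /dev/md127 /dev/null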

Message 11 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Can I use this command?

#btrfs rescue zero-log --help
usage: btrfs rescue zero-log <device>

    Clear the tree log. Usable if it's corrupted and prevents mount.

Message 12 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

I have made a little progress.

I mounted /dev/md0 as /sysroot, and now I find btrfs-find-root in /sysroot/sbin.

When I execute the command I get the following:

-sh: /sysroot/sbin/btrfs-find-root: not found

I'll keep searching...
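
For anyone following along: "not found" on a binary that exists usually means the dynamic loader or the shared libraries it needs are missing, not the file itself. A sketch of how one might check and work around this; the loader name and library paths here are guesses and will differ per system:

# list the shared libraries the binary needs (if ldd is available)
ldd /sysroot/sbin/btrfs-find-root
# run it through the loader on md0, using md0's own libraries
/sysroot/lib/ld-linux.so.3 --library-path /sysroot/lib:/sysroot/usr/lib /sysroot/sbin/btrfs-find-root /dev/md127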

Message 13 of 27
mdgm-ntgr
NETGEAR Employee Retired

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

If you try to fix it yourself, you might only succeed in making the problem worse.

If you must try things yourself it's best to clone the disks first.

No, I would not run

# btrfs rescue zero-log

That could well bake in problems and make the data permanently unrecoverable. Appropriate checks have to be made before running commands.
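
For example (a sketch only, with placeholder device names; verify them with lsblk first, since the second device is overwritten): with the disks attached to a separate Linux machine, each member disk could be cloned with GNU ddrescue onto a same-size or larger disk before experimenting:

# fast first pass, skipping bad areas (-n), then retry bad sectors up to 3 times (-r3)
ddrescue -f -n /dev/sdX /dev/sdY /root/sdX.map
ddrescue -f -r3 /dev/sdX /dev/sdY /root/sdX.map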

Message 14 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Finally I got btrfs-find-root running.

I am now restoring my data. :)

Because it's a large amount of data, I think it will run for a long time.

 

Message 15 of 27
mdgm-ntgr
NETGEAR Employee Retired

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Yes, it will take quite some time. The more data you have, the longer it will take.

Message 16 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

All my data has been restored successfully.

Message 17 of 27
Ingvar
Aspirant

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Hi.

I have a similar problem with the RN104. While deleting data it fell into the assert_qgroups_uptodate+58 state, and reinstalling the OS does not help. I understand you had a similar problem and managed to solve it; could you describe, step by step and in detail, how you did it? Part of the array's data was never backed up, so I can't just start over. Do I understand correctly that the data can be restored under Linux? Do I have to boot the NAS in a special mode, or do you have another solution? Right now I am setting up my Fedora machine to assemble the RAID5 and read it using btrfs.

 

I have the data on 4x 3TB drives in RAID5.

Message 18 of 27
Retired_Member
Not applicable

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

You don't need another system to restore your data.

The only thing you need is a volume you can restore your data to.

Shut your NAS down. Start it up in Tech Support mode.
See http://kb.netgear.com/22891/How-do-I-access-the-boot-menu-on-my-ReadyNAS-104-204-214-or-314 for how to do this.
Find out the IP address the NAS got from your DHCP server. Now you can telnet to your NAS.

Log in with the credentials mentioned here: http://netgear.nas-central.org/wiki/TechSupportMode

Now you can mount your RAID5.

There are multiple RAID configurations active.

First mount md0 somewhere. It holds the Netgear OS.
This is where you will find the btrfs tools.

When trying to mount the data volume, btrfs will give you an error.

Use btrfs-find-root to find the latest saved root.

You cannot use btrfs-find-root right away. You have to load its dependencies first.

Check here to find the dependencies:

http://ask.xmodulo.com/check-library-dependency-program-process-linux.html

The files btrfs-find-root needs are on the md0 volume.

I found that if you point the library path there, btrfs-find-root will work.

After you have found the latest rootid, you can restore most of the data with

btrfs restore -i -m -r <rootid> /path/to/raid5 /path/to/restore

For each command you can search the internet for more detailed information

(mounting RAID5, using btrfs-progs, etc.).

Restoring can take a long time.

My system took about 3 to 4 days to restore all data. A condensed sketch of the whole sequence is below.

Good luck and I hope you will get your data back.
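
To pull the steps above together, here is a condensed sketch of the sequence as I understand it, run from the Tech Support mode shell. Device names, mount points, and library paths are assumptions (check /proc/mdstat and adjust), and ideally work on cloned disks:

# 1. inspect the RAID sets (md0 = OS, md127 = data volume on a typical unit)
cat /proc/mdstat

# 2. mount the OS partition to get at the btrfs tools
mkdir -p /sysroot
mount /dev/md0 /sysroot

# 3. run btrfs-find-root against the data array, using the libraries on md0
LD_LIBRARY_PATH=/sysroot/lib:/sysroot/usr/lib /sysroot/sbin/btrfs-find-root /dev/md127
# note the most recent tree root / generation it reports

# 4. mount a destination volume large enough for the data (e.g. a USB disk)
mkdir -p /mnt/restore
mount /dev/sdX1 /mnt/restore

# 5. restore, ignoring errors (-i) and restoring metadata (-m)
LD_LIBRARY_PATH=/sysroot/lib:/sysroot/usr/lib /sysroot/sbin/btrfs restore -i -m -r <rootid> /dev/md127 /mnt/restore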

 

 

Message 19 of 27
X-1
Aspirant

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Hello, I'm having the same/similar problem after trying to balance. I received an out of memory error and the unit locked up. When restarted, I receive the message in the subject line above. After contacting Netgear with ticket # 28656957 and sending my logs, I was told that my data was likely OK; nothing was found to indicate the hard drives had any issues. I was then told that I would need to purchase a repair contract for a whole year to help fix my issue. I'm not happy that I have to do this, but I do want my data back and the unit operational again. Prior to finding out how to purchase such a contract, we were disconnected.

I have been a loyal customer of Netgear for many years and they have always been easy to work with. I use Netgear at both home and work, with many years of success. But I am growing frustrated with trying to contact Netgear to get this system back up and running. I also shouldn't need to purchase anything more when all I did was try to rebalance my drives. I'm going to call to see how I might purchase this contract, but I hope Netgear really looks into this issue with rebalancing and losing the volume. I see many others are having this same or similar problem.

Model: RN10400|ReadyNAS 100 Series 4-Bay (Diskless)
Message 20 of 27
StephenB
Guru

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm


@X-1 wrote:

I was then told that I would need to purchase a repair contract for a whole year to help fix my issue.  


They also offer per-incident support, which covers up to one hour of support time.  Ask why you can't do that.

Message 21 of 27
jak0lantash
Mentor

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm


@X-1 wrote:

I see many others are having this same or similiar problem.


This error message is very confusing, but it's also kind of a catch-all, so many users may encounter the exact same message for completely different reasons.

Maybe you want to upvote this "idea": https://community.netgear.com/t5/Idea-Exchange-for-ReadyNAS/Change-the-incredibly-confusing-error-me...

 


@X-1 wrote:

I was then told that I would need to purchase a repair contract for a whole year to help fix my issue.  


(Usually) if the unit is out of its Support entitlement, you need a Support contract for Support to investigate the issue, which is either a Per Incident or a yearly contract. But Data Recovery is out of the scope of Support and isn't covered by a Support contract; it's a separate contract. I'd advise you to clarify this with Support.

 

If you download the logs from the GUI, look for md127 in dmesg.log and paste an extract here. I can have a look if you want.
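
For instance, something like this should pull out the relevant extract (the exact filename inside the log bundle may differ):

grep -n -B2 -A10 'md127' dmesg.log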

Message 22 of 27
jak0lantash
Mentor

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm


@StephenB wrote:

@X-1 wrote:

I was then told that I would need to purchase a repair contract for a whole year to help fix my issue.  


They also offer per-incident support, which covers up to one hour of support time.  Ask why you can't do that.


I think the Per Incident isn't limited in Support time; it covers one Support case (one problem).

The Data Recovery contract is technically limited in Support time.

Message 23 of 27
StephenB
Guru

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm


@jak0lantash wrote:


I think the Per Incident isn't limited in Support time, it covers one Support case (one problem).

 


I was told it was, but perhaps my source was mistaken.

Message 24 of 27
X-1
Aspirant

Re: Please remove inactive volumes in order to use the disk. Disk #1,2,3,4 - Attn: mdgm

Thanks for your reply. I agree with your post about the message being confusing and potentially leading users to remove disks. Therefore, I have upvoted it as requested.

 

I think I found the section you asked to see; I hope you find something useful there. I have been on the phone for over an hour just to speak with someone about my options to pay. I may soon need to give up and try again tomorrow.

 

[Tue Jul 4 23:08:57 2017] md: md127 stopped.
[Tue Jul 4 23:08:57 2017] md: bind<sdb3>
[Tue Jul 4 23:08:57 2017] md: bind<sdc3>
[Tue Jul 4 23:08:57 2017] md: bind<sdd3>
[Tue Jul 4 23:08:57 2017] md: bind<sda3>
[Tue Jul 4 23:08:57 2017] md/raid:md127: device sda3 operational as raid disk 0
[Tue Jul 4 23:08:57 2017] md/raid:md127: device sdd3 operational as raid disk 3
[Tue Jul 4 23:08:57 2017] md/raid:md127: device sdc3 operational as raid disk 2
[Tue Jul 4 23:08:57 2017] md/raid:md127: device sdb3 operational as raid disk 1
[Tue Jul 4 23:08:57 2017] md/raid:md127: allocated 4294kB
[Tue Jul 4 23:08:57 2017] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
[Tue Jul 4 23:08:57 2017] RAID conf printout:
[Tue Jul 4 23:08:57 2017] --- level:5 rd:4 wd:4
[Tue Jul 4 23:08:57 2017] disk 0, o:1, dev:sda3
[Tue Jul 4 23:08:57 2017] disk 1, o:1, dev:sdb3
[Tue Jul 4 23:08:57 2017] disk 2, o:1, dev:sdc3
[Tue Jul 4 23:08:57 2017] disk 3, o:1, dev:sdd3
[Tue Jul 4 23:08:57 2017] md127: detected capacity change from 0 to 11987456360448
[Tue Jul 4 23:08:57 2017] systemd-journald[1062]: Received request to flush runtime journal from PID 1
[Tue Jul 4 23:08:58 2017] BTRFS: device label 2fe50eac:data devid 1 transid 423264 /dev/md127
[Tue Jul 4 23:08:58 2017] Adding 1047420k swap on /dev/md1. Priority:-1 extents:1 across:1047420k
[Tue Jul 4 23:11:40 2017] systemd invoked oom-killer: gfp_mask=0x24200ca, order=0, oom_score_adj=0
[Tue Jul 4 23:11:40 2017] systemd cpuset=/ mems_allowed=0
[Tue Jul 4 23:11:40 2017] CPU: 0 PID: 1 Comm: systemd Tainted: P O 4.4.68.armada.1 #1
[Tue Jul 4 23:11:40 2017] Hardware name: Marvell Armada 370/XP (Device Tree)
[Tue Jul 4 23:11:40 2017] [<c0015f24>] (unwind_backtrace) from [<c00120dc>] (show_stack+0x10/0x18)
[Tue Jul 4 23:11:40 2017] [<c00120dc>] (show_stack) from [<c0391960>] (dump_stack+0x78/0x9c)
[Tue Jul 4 23:11:40 2017] [<c0391960>] (dump_stack) from [<c00d4478>] (dump_header+0x48/0x1ac)
[Tue Jul 4 23:11:40 2017] [<c00d4478>] (dump_header) from [<c009d1f4>] (oom_kill_process+0x1f8/0x478)
[Tue Jul 4 23:11:40 2017] [<c009d1f4>] (oom_kill_process) from [<c009d750>] (out_of_memory+0x26c/0x358)
[Tue Jul 4 23:11:40 2017] [<c009d750>] (out_of_memory) from [<c00a15b0>] (__alloc_pages_nodemask+0x720/0x884)
[Tue Jul 4 23:11:40 2017] [<c00a15b0>] (__alloc_pages_nodemask) from [<c00ca698>] (__read_swap_cache_async+0x108/0x1cc)
[Tue Jul 4 23:11:40 2017] [<c00ca698>] (__read_swap_cache_async) from [<c00ca770>] (read_swap_cache_async+0x14/0x38)
[Tue Jul 4 23:11:40 2017] [<c00ca770>] (read_swap_cache_async) from [<c00ca8d4>] (swapin_readahead+0x140/0x198)
[Tue Jul 4 23:11:40 2017] [<c00ca8d4>] (swapin_readahead) from [<c00bbb40>] (handle_mm_fault+0x9a4/0xce8)
[Tue Jul 4 23:11:40 2017] [<c00bbb40>] (handle_mm_fault) from [<c0018ba4>] (do_page_fault+0x208/0x2a4)
[Tue Jul 4 23:11:40 2017] [<c0018ba4>] (do_page_fault) from [<c00092d8>] (do_DataAbort+0x34/0xbc)
[Tue Jul 4 23:11:40 2017] [<c00092d8>] (do_DataAbort) from [<c0012dfc>] (__dabt_usr+0x3c/0x40)
[Tue Jul 4 23:11:40 2017] Exception stack(0xdf43bfb0 to 0xdf43bff8)
[Tue Jul 4 23:11:40 2017] bfa0: 16deb461 00000000 000003e8 00000000
[Tue Jul 4 23:11:40 2017] bfc0: 000003e8 bebab820 16deb461 00000000 00000000 00000001 00000004 bebabaf4
[Tue Jul 4 23:11:40 2017] bfe0: b6c9f14c bebab800 b6c8c570 b6c7e14c 20000010 ffffffff
[Tue Jul 4 23:11:40 2017] Mem-Info:
[Tue Jul 4 23:11:40 2017] active_anon:6 inactive_anon:6 isolated_anon:0
active_file:18013 inactive_file:18078 isolated_file:64
unevictable:0 dirty:18144 writeback:0 unstable:0
slab_reclaimable:1243 slab_unreclaimable:57550
mapped:158 shmem:0 pagetables:37 bounce:0
free:4072 free_pcp:33 free_cma:0
[Tue Jul 4 23:11:40 2017] Normal free:16288kB min:16384kB low:20480kB high:24576kB active_anon:24kB inactive_anon:24kB active_file:72052kB inactive_file:72312kB unevictable:0kB isolated(anon):0kB isolated(file):256kB present:524288kB managed:508420kB mlocked:0kB dirty:72576kB writeback:0kB mapped:632kB shmem:0kB slab_reclaimable:4972kB slab_unreclaimable:230200kB kernel_stack:888kB pagetables:148kB unstable:0kB bounce:0kB free_pcp:132kB local_pcp:132kB free_cma:0kB writeback_tmp:0kB pages_scanned:900564 all_unreclaimable? yes
[Tue Jul 4 23:11:40 2017] lowmem_reserve[]: 0 0 0
[Tue Jul 4 23:11:40 2017] Normal: 56*4kB (UME) 86*8kB (ME) 89*16kB (ME) 48*32kB (ME) 36*64kB (UME) 23*128kB (UME) 4*256kB (M) 4*512kB (M) 2*1024kB (M) 1*2048kB (M) 0*4096kB = 16288kB
[Tue Jul 4 23:11:40 2017] 36164 total pagecache pages
[Tue Jul 4 23:11:40 2017] 4 pages in swap cache
[Tue Jul 4 23:11:40 2017] Swap cache stats: add 746, delete 742, find 114/171
[Tue Jul 4 23:11:40 2017] Free swap = 1044980kB
[Tue Jul 4 23:11:40 2017] Total swap = 1047420kB
[Tue Jul 4 23:11:40 2017] 131072 pages RAM
[Tue Jul 4 23:11:40 2017] 0 pages HighMem/MovableOnly
[Tue Jul 4 23:11:40 2017] 3967 pages reserved
[Tue Jul 4 23:11:40 2017] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[Tue Jul 4 23:11:40 2017] [ 1062] 0 1062 5663 110 11 0 81 0 systemd-journal
[Tue Jul 4 23:11:40 2017] [ 1063] 0 1063 440 59 5 0 9 0 pilgrim
[Tue Jul 4 23:11:40 2017] [ 1095] 0 1095 2416 71 7 0 163 -1000 systemd-udevd
[Tue Jul 4 23:11:40 2017] [ 1133] 0 1133 1351 54 6 0 39 0 mount
[Tue Jul 4 23:11:40 2017] Out of memory: Kill process 1062 (systemd-journal) score 0 or sacrifice child
[Tue Jul 4 23:11:40 2017] Killed process 1062 (systemd-journal) total-vm:22652kB, anon-rss:0kB, file-rss:440kB
[Tue Jul 4 23:11:41 2017] pilgrim invoked oom-killer: gfp_mask=0x24200ca, order=0, oom_score_adj=0
[Tue Jul 4 23:11:41 2017] pilgrim cpuset=/ mems_allowed=0
[Tue Jul 4 23:11:41 2017] CPU: 0 PID: 1063 Comm: pilgrim Tainted: P O 4.4.68.armada.1 #1
[Tue Jul 4 23:11:41 2017] Hardware name: Marvell Armada 370/XP (Device Tree)
[Tue Jul 4 23:11:41 2017] [<c0015f24>] (unwind_backtrace) from [<c00120dc>] (show_stack+0x10/0x18)
[Tue Jul 4 23:11:41 2017] [<c00120dc>] (show_stack) from [<c0391960>] (dump_stack+0x78/0x9c)
[Tue Jul 4 23:11:41 2017] [<c0391960>] (dump_stack) from [<c00d4478>] (dump_header+0x48/0x1ac)
[Tue Jul 4 23:11:41 2017] [<c00d4478>] (dump_header) from [<c009d1f4>] (oom_kill_process+0x1f8/0x478)
[Tue Jul 4 23:11:41 2017] [<c009d1f4>] (oom_kill_process) from [<c009d750>] (out_of_memory+0x26c/0x358)
[Tue Jul 4 23:11:41 2017] [<c009d750>] (out_of_memory) from [<c00a15b0>] (__alloc_pages_nodemask+0x720/0x884)
[Tue Jul 4 23:11:41 2017] [<c00a15b0>] (__alloc_pages_nodemask) from [<c00ca698>] (__read_swap_cache_async+0x108/0x1cc)
[Tue Jul 4 23:11:41 2017] [<c00ca698>] (__read_swap_cache_async) from [<c00ca770>] (read_swap_cache_async+0x14/0x38)
[Tue Jul 4 23:11:41 2017] [<c00ca770>] (read_swap_cache_async) from [<c00ca884>] (swapin_readahead+0xf0/0x198)
[Tue Jul 4 23:11:41 2017] [<c00ca884>] (swapin_readahead) from [<c00bbb40>] (handle_mm_fault+0x9a4/0xce8)
[Tue Jul 4 23:11:41 2017] [<c00bbb40>] (handle_mm_fault) from [<c0018ba4>] (do_page_fault+0x208/0x2a4)
[Tue Jul 4 23:11:41 2017] [<c0018ba4>] (do_page_fault) from [<c00092d8>] (do_DataAbort+0x34/0xbc)
[Tue Jul 4 23:11:41 2017] [<c00092d8>] (do_DataAbort) from [<c0012b38>] (__dabt_svc+0x38/0x60)
[Tue Jul 4 23:11:41 2017] Exception stack(0xdb423db0 to 0xdb423df8)
[Tue Jul 4 23:11:41 2017] 3da0: be8309f0 db423e38 ffffffe4 00000000
[Tue Jul 4 23:11:41 2017] 3dc0: be8309f0 00000000 00000004 db423e28 db423e28 00000000 00000000 db423f70
[Tue Jul 4 23:11:41 2017] 3de0: 0000001c db423e04 00000000 c038eab4 00000113 ffffffff
[Tue Jul 4 23:11:41 2017] [<c0012b38>] (__dabt_svc) from [<c038eab4>] (__copy_to_user_std+0xd4/0x3c4)
[Tue Jul 4 23:11:41 2017] [<c038eab4>] (__copy_to_user_std) from [<c00e88a4>] (core_sys_select+0x190/0x32c)
[Tue Jul 4 23:11:41 2017] [<c00e88a4>] (core_sys_select) from [<c00e8afc>] (SyS_select+0xbc/0x110)
[Tue Jul 4 23:11:41 2017] [<c00e8afc>] (SyS_select) from [<c000f4c0>] (ret_fast_syscall+0x0/0x34)
[Tue Jul 4 23:11:41 2017] Mem-Info:
[Tue Jul 4 23:11:41 2017] active_anon:2 inactive_anon:2 isolated_anon:0
active_file:18008 inactive_file:18077 isolated_file:64
unevictable:0 dirty:18144 writeback:0 unstable:0
slab_reclaimable:1239 slab_unreclaimable:57567
mapped:153 shmem:0 pagetables:26 bounce:0
free:4072 free_pcp:49 free_cma:0
[Tue Jul 4 23:11:41 2017] Normal free:16288kB min:16384kB low:20480kB high:24576kB active_anon:8kB inactive_anon:8kB active_file:72032kB inactive_file:72308kB unevictable:0kB isolated(anon):0kB isolated(file):256kB present:524288kB managed:508420kB mlocked:0kB dirty:72576kB writeback:0kB mapped:612kB shmem:0kB slab_reclaimable:4956kB slab_unreclaimable:230268kB kernel_stack:888kB pagetables:104kB unstable:0kB bounce:0kB free_pcp:196kB local_pcp:196kB free_cma:0kB writeback_tmp:0kB pages_scanned:914764 all_unreclaimable? yes
[Tue Jul 4 23:11:41 2017] lowmem_reserve[]: 0 0 0
[Tue Jul 4 23:11:41 2017] Normal: 58*4kB (ME) 87*8kB (UME) 88*16kB (UME) 48*32kB (ME) 34*64kB (UME) 22*128kB (UME) 5*256kB (M) 4*512kB (M) 2*1024kB (M) 1*2048kB (M) 0*4096kB = 16288kB
[Tue Jul 4 23:11:41 2017] 36156 total pagecache pages
[Tue Jul 4 23:11:41 2017] 4 pages in swap cache
[Tue Jul 4 23:11:41 2017] Swap cache stats: add 793, delete 789, find 145/231
[Tue Jul 4 23:11:41 2017] Free swap = 1045284kB
[Tue Jul 4 23:11:41 2017] Total swap = 1047420kB
[Tue Jul 4 23:11:41 2017] 131072 pages RAM
[Tue Jul 4 23:11:41 2017] 0 pages HighMem/MovableOnly
[Tue Jul 4 23:11:41 2017] 3967 pages reserved
[Tue Jul 4 23:11:41 2017] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[Tue Jul 4 23:11:41 2017] [ 1063] 0 1063 440 52 5 0 16 0 pilgrim
[Tue Jul 4 23:11:41 2017] [ 1095] 0 1095 2416 71 7 0 163 -1000 systemd-udevd
[Tue Jul 4 23:11:41 2017] [ 1133] 0 1133 1351 54 6 0 39 0 mount
[Tue Jul 4 23:11:41 2017] Out of memory: Kill process 1133 (mount) score 0 or sacrifice child
[Tue Jul 4 23:11:41 2017] Killed process 1133 (mount) total-vm:5404kB, anon-rss:0kB, file-rss:216kB
[Tue Jul 4 23:12:03 2017] mount: page allocation failure: order:0, mode:0x2600040
[Tue Jul 4 23:12:03 2017] CPU: 0 PID: 1133 Comm: mount Tainted: P O 4.4.68.armada.1 #1
[Tue Jul 4 23:12:03 2017] Hardware name: Marvell Armada 370/XP (Device Tree)
[Tue Jul 4 23:12:03 2017] [<c0015f24>] (unwind_backtrace) from [<c00120dc>] (show_stack+0x10/0x18)
[Tue Jul 4 23:12:03 2017] [<c00120dc>] (show_stack) from [<c0391960>] (dump_stack+0x78/0x9c)
[Tue Jul 4 23:12:03 2017] [<c0391960>] (dump_stack) from [<c009ed60>] (warn_alloc_failed+0xdc/0x120)
[Tue Jul 4 23:12:03 2017] [<c009ed60>] (warn_alloc_failed) from [<c00a1494>] (__alloc_pages_nodemask+0x604/0x884)
[Tue Jul 4 23:12:03 2017] [<c00a1494>] (__alloc_pages_nodemask) from [<c00d07c8>] (allocate_slab+0x28c/0x2c4)
[Tue Jul 4 23:12:03 2017] [<c00d07c8>] (allocate_slab) from [<c00d1f38>] (___slab_alloc.constprop.13+0x230/0x378)
[Tue Jul 4 23:12:03 2017] [<c00d1f38>] (___slab_alloc.constprop.13) from [<c00d24c0>] (kmem_cache_alloc+0x158/0x16c)
[Tue Jul 4 23:12:03 2017] [<c00d24c0>] (kmem_cache_alloc) from [<c0313d1c>] (ulist_alloc+0x1c/0x54)
[Tue Jul 4 23:12:03 2017] [<c0313d1c>] (ulist_alloc) from [<c0310b50>] (__resolve_indirect_refs+0x1c/0x720)
[Tue Jul 4 23:12:03 2017] [<c0310b50>] (__resolve_indirect_refs) from [<c03125e8>] (find_parent_nodes+0x59c/0x930)
[Tue Jul 4 23:12:03 2017] [<c03125e8>] (find_parent_nodes) from [<c0312a38>] (__btrfs_find_all_roots+0xbc/0x118)
[Tue Jul 4 23:12:03 2017] [<c0312a38>] (__btrfs_find_all_roots) from [<c0312b10>] (btrfs_find_all_roots+0x60/0x7c)
[Tue Jul 4 23:12:03 2017] [<c0312b10>] (btrfs_find_all_roots) from [<c0316a4c>] (btrfs_qgroup_prepare_account_extents+0x64/0xa8)
[Tue Jul 4 23:12:03 2017] [<c0316a4c>] (btrfs_qgroup_prepare_account_extents) from [<c02a3850>] (btrfs_commit_transaction+0x5e0/0xbc0)
[Tue Jul 4 23:12:03 2017] [<c02a3850>] (btrfs_commit_transaction) from [<c0302f40>] (btrfs_recover_relocation+0x360/0x38c)
[Tue Jul 4 23:12:03 2017] [<c0302f40>] (btrfs_recover_relocation) from [<c02a0764>] (open_ctree+0x1f70/0x2350)
[Tue Jul 4 23:12:03 2017] [<c02a0764>] (open_ctree) from [<c0274f14>] (btrfs_mount+0x5d4/0x704)
[Tue Jul 4 23:12:03 2017] [<c0274f14>] (btrfs_mount) from [<c00da644>] (mount_fs+0x44/0x15c)
[Tue Jul 4 23:12:03 2017] [<c00da644>] (mount_fs) from [<c00f3280>] (vfs_kern_mount+0x4c/0xf0)
[Tue Jul 4 23:12:03 2017] [<c00f3280>] (vfs_kern_mount) from [<c0274180>] (mount_subvol+0x10c/0x8cc)
[Tue Jul 4 23:12:03 2017] [<c0274180>] (mount_subvol) from [<c0274b24>] (btrfs_mount+0x1e4/0x704)
[Tue Jul 4 23:12:03 2017] [<c0274b24>] (btrfs_mount) from [<c00da644>] (mount_fs+0x44/0x15c)
[Tue Jul 4 23:12:03 2017] [<c00da644>] (mount_fs) from [<c00f3280>] (vfs_kern_mount+0x4c/0xf0)
[Tue Jul 4 23:12:03 2017] [<c00f3280>] (vfs_kern_mount) from [<c00f59c8>] (do_mount+0x1c8/0xc28)
[Tue Jul 4 23:12:03 2017] [<c00f59c8>] (do_mount) from [<c00f6798>] (SyS_mount+0x78/0xa8)
[Tue Jul 4 23:12:03 2017] [<c00f6798>] (SyS_mount) from [<c000f4c0>] (ret_fast_syscall+0x0/0x34)
[Tue Jul 4 23:12:03 2017] Mem-Info:
[Tue Jul 4 23:12:03 2017] active_anon:2 inactive_anon:2 isolated_anon:0
active_file:18145 inactive_file:18151 isolated_file:0
unevictable:0 dirty:18144 writeback:0 unstable:0
slab_reclaimable:1239 slab_unreclaimable:61549
mapped:153 shmem:0 pagetables:26 bounce:0
free:0 free_pcp:0 free_cma:0
[Tue Jul 4 23:12:03 2017] Normal free:0kB min:16384kB low:20480kB high:24576kB active_anon:8kB inactive_anon:8kB active_file:72580kB inactive_file:72604kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:524288kB managed:508420kB mlocked:0kB dirty:72576kB writeback:0kB mapped:612kB shmem:0kB slab_reclaimable:4956kB slab_unreclaimable:246196kB kernel_stack:888kB pagetables:104kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:7418612 all_unreclaimable? yes
[Tue Jul 4 23:12:03 2017] lowmem_reserve[]: 0 0 0
[Tue Jul 4 23:12:03 2017] Normal: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 0kB
[Tue Jul 4 23:12:03 2017] 36296 total pagecache pages
[Tue Jul 4 23:12:03 2017] 4 pages in swap cache
[Tue Jul 4 23:12:03 2017] Swap cache stats: add 793, delete 789, find 145/231
[Tue Jul 4 23:12:03 2017] Free swap = 1045284kB
[Tue Jul 4 23:12:03 2017] Total swap = 1047420kB
[Tue Jul 4 23:12:03 2017] 131072 pages RAM
[Tue Jul 4 23:12:03 2017] 0 pages HighMem/MovableOnly
[Tue Jul 4 23:12:03 2017] 3967 pages reserved
[Tue Jul 4 23:12:03 2017] SLUB: Unable to allocate memory on node -1 (gfp=0x2400040)
[Tue Jul 4 23:12:03 2017] cache: kmalloc-64, object size: 64, buffer size: 64, default order: 0, min order: 0
[Tue Jul 4 23:12:03 2017] node 0: slabs: 56031, objs: 3585984, free: 0
[Tue Jul 4 23:12:03 2017] BTRFS warning (device md127): Skipping commit of aborted transaction.
[Tue Jul 4 23:12:03 2017] ------------[ cut here ]------------
[Tue Jul 4 23:12:03 2017] WARNING: CPU: 0 PID: 1133 at fs/btrfs/transaction.c:1855 btrfs_commit_transaction+0xa1c/0xbc0()
[Tue Jul 4 23:12:03 2017] BTRFS: Transaction aborted (error -12)
[Tue Jul 4 23:12:03 2017] Modules linked in: vpd(PO)
[Tue Jul 4 23:12:03 2017] CPU: 0 PID: 1133 Comm: mount Tainted: P O 4.4.68.armada.1 #1
[Tue Jul 4 23:12:03 2017] Hardware name: Marvell Armada 370/XP (Device Tree)
[Tue Jul 4 23:12:03 2017] [<c0015f24>] (unwind_backtrace) from [<c00120dc>] (show_stack+0x10/0x18)
[Tue Jul 4 23:12:03 2017] [<c00120dc>] (show_stack) from [<c0391960>] (dump_stack+0x78/0x9c)
[Tue Jul 4 23:12:03 2017] [<c0391960>] (dump_stack) from [<c002488c>] (warn_slowpath_common+0x74/0xac)
[Tue Jul 4 23:12:03 2017] [<c002488c>] (warn_slowpath_common) from [<c002495c>] (warn_slowpath_fmt+0x30/0x40)
[Tue Jul 4 23:12:03 2017] [<c002495c>] (warn_slowpath_fmt) from [<c02a3c8c>] (btrfs_commit_transaction+0xa1c/0xbc0)
[Tue Jul 4 23:12:03 2017] [<c02a3c8c>] (btrfs_commit_transaction) from [<c0302f40>] (btrfs_recover_relocation+0x360/0x38c)
[Tue Jul 4 23:12:03 2017] [<c0302f40>] (btrfs_recover_relocation) from [<c02a0764>] (open_ctree+0x1f70/0x2350)
[Tue Jul 4 23:12:03 2017] [<c02a0764>] (open_ctree) from [<c0274f14>] (btrfs_mount+0x5d4/0x704)
[Tue Jul 4 23:12:03 2017] [<c0274f14>] (btrfs_mount) from [<c00da644>] (mount_fs+0x44/0x15c)
[Tue Jul 4 23:12:03 2017] [<c00da644>] (mount_fs) from [<c00f3280>] (vfs_kern_mount+0x4c/0xf0)
[Tue Jul 4 23:12:03 2017] [<c00f3280>] (vfs_kern_mount) from [<c0274180>] (mount_subvol+0x10c/0x8cc)
[Tue Jul 4 23:12:03 2017] [<c0274180>] (mount_subvol) from [<c0274b24>] (btrfs_mount+0x1e4/0x704)
[Tue Jul 4 23:12:03 2017] [<c0274b24>] (btrfs_mount) from [<c00da644>] (mount_fs+0x44/0x15c)
[Tue Jul 4 23:12:03 2017] [<c00da644>] (mount_fs) from [<c00f3280>] (vfs_kern_mount+0x4c/0xf0)
[Tue Jul 4 23:12:03 2017] [<c00f3280>] (vfs_kern_mount) from [<c00f59c8>] (do_mount+0x1c8/0xc28)
[Tue Jul 4 23:12:03 2017] [<c00f59c8>] (do_mount) from [<c00f6798>] (SyS_mount+0x78/0xa8)
[Tue Jul 4 23:12:03 2017] [<c00f6798>] (SyS_mount) from [<c000f4c0>] (ret_fast_syscall+0x0/0x34)
[Tue Jul 4 23:12:03 2017] ---[ end trace 42120c8e1437ba1a ]---
[Tue Jul 4 23:12:03 2017] BTRFS: error (device md127) in cleanup_transaction:1855: errno=-12 Out of memory
[Tue Jul 4 23:12:03 2017] BTRFS info (device md127): delayed_refs has NO entry
[Tue Jul 4 23:12:03 2017] BTRFS warning (device md127): failed to recover relocation: -12
[Tue Jul 4 23:12:03 2017] BTRFS error (device md127): cleaner transaction attach returned -30
[Tue Jul 4 23:12:03 2017] BTRFS error (device md127): open_ctree failed

Message 25 of 27