
Repeating GPF

wastedyouth1
Aspirant

Repeating GPF

Hi

I was wondering if anyone could advise on the cause of the GPFs I'm seeing in the syslog of my ReadyNAS Pro. They happen at least once a week and normally result in my having to power-cycle the NAS.
Extract from the syslog below:

Nov 15 16:01:10 Nassy2 kernel: alloc_fd: slot 5 not NULL!
Nov 15 16:01:12 Nassy2 kernel: general protection fault: 0000 [#3] SMP
Nov 15 16:01:12 Nassy2 kernel: last sysfs file: /sys/devices/virtual/block/md3/md/sync_action
Nov 15 16:01:12 Nassy2 kernel: CPU 1
Nov 15 16:01:12 Nassy2 kernel: Modules linked in: pvgpio nv6vpd(P)
Nov 15 16:01:12 Nassy2 kernel:
Nov 15 16:01:12 Nassy2 kernel: Pid: 17596, comm: grep Tainted: P D 2.6.37.6.RNx86_64.2.4 #1 NETGEAR ReadyNAS/
Nov 15 16:01:12 Nassy2 kernel: RIP: 0010:[<ffffffff880ac8c7>] [<ffffffff880ac8c7>] filp_close+0x17/0x90
Nov 15 16:01:12 Nassy2 kernel: RSP: 0000:ffff88007094df28 EFLAGS: 00010286
Nov 15 16:01:12 Nassy2 kernel: RAX: fffffffffffffffd RBX: 0400c8328bf591e0 RCX: 0000000000000001
Nov 15 16:01:12 Nassy2 kernel: RDX: 0000000000000000 RSI: ffff880077c13440 RDI: 0400c8328bf591e0
Nov 15 16:01:12 Nassy2 kernel: RBP: ffff88007094df48 R08: 0000000000000001 R09: 00000000fffce810
Nov 15 16:01:12 Nassy2 kernel: R10: ffff88007094c000 R11: 0000000000000000 R12: ffff880077c134c0
Nov 15 16:01:12 Nassy2 kernel: R13: 0000000000000001 R14: 0400c8328bf591e0 R15: 0000000000000000
Nov 15 16:01:12 Nassy2 kernel: FS: 0000000000000000(0000) GS:ffff88007ee80000(0063) knlGS:00000000f764e6b0
Nov 15 16:01:12 Nassy2 kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Nov 15 16:01:12 Nassy2 kernel: CR2: 00000000f76b55d0 CR3: 0000000070b49000 CR4: 00000000000006e0
Nov 15 16:01:12 Nassy2 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 15 16:01:12 Nassy2 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Nov 15 16:01:12 Nassy2 kernel: Process grep (pid: 17596, threadinfo ffff88007094c000, task ffff88007083db00)
Nov 15 16:01:12 Nassy2 kernel: Stack:
Nov 15 16:01:12 Nassy2 kernel: 0000000000000000 ffff880077c13440 ffff880077c134c0 0000000000000001
Nov 15 16:01:12 Nassy2 kernel: ffff88007094df78 ffffffff880adae4 0000000000000001 0000000000000000
Nov 15 16:01:12 Nassy2 kernel: 0000000000000000 0000000000000000 00000000fffce810 ffffffff88024c53
Nov 15 16:01:12 Nassy2 kernel: Call Trace:
Nov 15 16:01:12 Nassy2 kernel: [<ffffffff880adae4>] sys_close+0xa4/0x100
Nov 15 16:01:12 Nassy2 kernel: [<ffffffff88024c53>] ia32_sysret+0x0/0x5
Nov 15 16:01:12 Nassy2 kernel: Code: 83 80 00 00 00 48 8b 1c 24 4c 8b 64 24 08 c9 c3 66 66 66 90 55 48 89 e5 48 83 ec 20 48 89 5d e8 4c 89 65 f0 48 89 fb 4c 89 6d f8 <48> 8b 47 30 49 89 f4 48 85 c0 74 52 48 8b 47 20 48 85 c0 74 44
Nov 15 16:01:12 Nassy2 kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
Nov 15 16:01:12 Nassy2 kernel: RSP <ffff88007094df28>
Nov 15 16:01:12 Nassy2 kernel: ---[ end trace 2c9ff00a9fb03955 ]---
Nov 15 16:01:14 Nassy2 kernel: general protection fault: 0000 [#4] SMP
Nov 15 16:01:14 Nassy2 kernel: last sysfs file: /sys/devices/virtual/block/md3/md/sync_action
Nov 15 16:01:14 Nassy2 kernel: CPU 3
Nov 15 16:01:14 Nassy2 kernel: Modules linked in: pvgpio nv6vpd(P)
Nov 15 16:01:14 Nassy2 kernel:
Nov 15 16:01:14 Nassy2 kernel: Pid: 17596, comm: grep Tainted: P D 2.6.37.6.RNx86_64.2.4 #1 NETGEAR ReadyNAS/
Nov 15 16:01:14 Nassy2 kernel: RIP: 0010:[<ffffffff880ac8c7>] [<ffffffff880ac8c7>] filp_close+0x17/0x90
Nov 15 16:01:14 Nassy2 kernel: RSP: 0000:ffff88007094dcd8 EFLAGS: 00010286
Nov 15 16:01:14 Nassy2 kernel: RAX: ffff880077dff010 RBX: 01000608c5620620 RCX: 0000000000000000
Nov 15 16:01:14 Nassy2 kernel: RDX: 0000000000000000 RSI: ffff880077c13440 RDI: 01000608c5620620
Nov 15 16:01:14 Nassy2 kernel: RBP: ffff88007094dcf8 R08: ffff88007094c000 R09: 0000000000000001
Nov 15 16:01:14 Nassy2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff880077c13440
Nov 15 16:01:14 Nassy2 kernel: R13: 0000000000000000 R14: ffff88000c6d8ac0 R15: 0000000000000010
Nov 15 16:01:14 Nassy2 kernel: FS: 0000000000000000(0000) GS:ffff88007ef80000(0000) knlGS:0000000000000000
Nov 15 16:01:14 Nassy2 kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Nov 15 16:01:14 Nassy2 kernel: CR2: 00000000f776df05 CR3: 0000000071345000 CR4: 00000000000006e0
Nov 15 16:01:14 Nassy2 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Nov 15 16:01:14 Nassy2 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Nov 15 16:01:14 Nassy2 kernel: Process grep (pid: 17596, threadinfo ffff88007094c000, task ffff88007083db00)
Nov 15 16:01:14 Nassy2 kernel: Stack:
Nov 15 16:01:14 Nassy2 kernel: ffff8800784a2300 000000000000007f ffff880077c13440 0000000000000000
Nov 15 16:01:14 Nassy2 kernel: ffff88007094dd38 ffffffff88036843 0000000000000000 ffff88007083db00
Nov 15 16:01:14 Nassy2 kernel: ffff880077c13440 ffff88007083db00 000000000000009c ffff88007e8fb840
Nov 15 16:01:14 Nassy2 kernel: Call Trace:
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88036843>] put_files_struct+0xc3/0xd0
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88036895>] exit_files+0x45/0x50
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88037a20>] do_exit+0x190/0x770
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88002bce>] ? apic_timer_interrupt+0xe/0x20
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88035fd0>] ? kmsg_dump+0x110/0x160
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88006506>] oops_end+0xa6/0xb0
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88006606>] die+0x56/0x90
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff880040f2>] do_general_protection+0x152/0x160
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff885b48ef>] general_protection+0x1f/0x30
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff880ac8c7>] ? filp_close+0x17/0x90
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff880adae4>] sys_close+0xa4/0x100
Nov 15 16:01:14 Nassy2 kernel: [<ffffffff88024c53>] ia32_sysret+0x0/0x5
Nov 15 16:01:14 Nassy2 kernel: Code: 83 80 00 00 00 48 8b 1c 24 4c 8b 64 24 08 c9 c3 66 66 66 90 55 48 89 e5 48 83 ec 20 48 89 5d e8 4c 89 65 f0 48 89 fb 4c 89 6d f8 <48> 8b 47 30 49 89 f4 48 85 c0 74 52 48 8b 47 20 48 85 c0 74 44
Nov 15 16:01:14 Nassy2 kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
Nov 15 16:01:14 Nassy2 kernel: RSP <ffff88007094dcd8>
Nov 15 16:01:14 Nassy2 kernel: ---[ end trace 2c9ff00a9fb03956 ]---
Nov 15 16:01:14 Nassy2 kernel: Fixing recursive fault but reboot is needed!


I'm hoping it's just a memory issue which can be easily resolved. Anyone recognise the problem?
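
For what it's worth, the register dump itself gives a hint: both oopses fault in filp_close, and RDI (which on x86_64 holds its first argument, the struct file pointer) contains values like 0400c8328bf591e0 that are not canonical kernel addresses, so the fd table entry appears to have been corrupted before close() ran. A quick way to confirm that every crash hits the same code path is to tally the RIP lines from a saved log. A minimal sketch, using two lines from the trace above as sample input:

```shell
#!/bin/sh
# Sketch: tally which kernel function each oops faulted in.
# sample.log stands in for the real /var/log/kernel.log; the two lines
# below are copied verbatim from the trace in this post.
cat > sample.log <<'EOF'
Nov 15 16:01:12 Nassy2 kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
Nov 15 16:01:14 Nassy2 kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
EOF
# Last field of each RIP line is symbol+offset; count the distinct ones.
grep 'RIP' sample.log | awk '{print $NF}' | sort | uniq -c | sort -rn
```

Run against the real kernel.log on the NAS; if the tally shows the same symbol and offset every time (as it does here), the crashes are one repeatable corruption pattern rather than random bit flips, which argues against simple bad RAM.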

Thanks in advance 🙂
Message 1 of 17
Andlier
Tutor

Re: Repeating GPF

I have the same issue with my ReadyNAS Ultra 6:
Dec  9 23:31:06 AndersNAS kernel: general protection fault: 0000 [#1] SMP 
Dec 9 23:31:06 AndersNAS kernel: last sysfs file: /sys/devices/virtual/block/md2/md/sync_action
Dec 9 23:31:06 AndersNAS kernel: CPU 1
Dec 9 23:31:06 AndersNAS kernel: Modules linked in: pvgpio nv6lcd nv6vpd(P)
Dec 9 23:31:06 AndersNAS kernel:
Dec 9 23:31:06 AndersNAS kernel: Pid: 14828, comm: sleep Tainted: P 2.6.37.6.RNx86_64.2.4 #1 /
Dec 9 23:31:06 AndersNAS kernel: RIP: 0010:[<ffffffff880ac8c7>] [<ffffffff880ac8c7>] filp_close+0x17/0x90
Dec 9 23:31:06 AndersNAS kernel: RSP: 0000:ffff880038e8bf28 EFLAGS: 00010286
Dec 9 23:31:06 AndersNAS kernel: RAX: fffffffffffffffd RBX: fc985e97ea331f00 RCX: 0000000000000001
Dec 9 23:31:06 AndersNAS kernel: RDX: 0000000000000000 RSI: ffff880039114580 RDI: fc985e97ea331f00
Dec 9 23:31:06 AndersNAS kernel: RBP: ffff880038e8bf48 R08: 0000000000000001 R09: 00000000ffb35190
Dec 9 23:31:06 AndersNAS kernel: R10: ffff880038e8a000 R11: 0000000000000000 R12: ffff880039114600
Dec 9 23:31:06 AndersNAS kernel: R13: 0000000000000001 R14: fc985e97ea331f00 R15: 0000000000000000
Dec 9 23:31:06 AndersNAS kernel: FS: 0000000000000000(0000) GS:ffff88003f280000(0063) knlGS:00000000f75b78c0
Dec 9 23:31:06 AndersNAS kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Dec 9 23:31:06 AndersNAS kernel: CR2: 00000000f761e5d0 CR3: 000000003d87e000 CR4: 00000000000006e0
Dec 9 23:31:06 AndersNAS kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec 9 23:31:06 AndersNAS kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Dec 9 23:31:06 AndersNAS kernel: Process sleep (pid: 14828, threadinfo ffff880038e8a000, task ffff88003d9638e0)
Dec 9 23:31:06 AndersNAS kernel: Stack:
Dec 9 23:31:06 AndersNAS kernel: 0000000000000000 ffff880039114580 ffff880039114600 0000000000000001
Dec 9 23:31:06 AndersNAS kernel: ffff880038e8bf78 ffffffff880adae4 0000000000000001 0000000000000000
Dec 9 23:31:06 AndersNAS kernel: 0000000000000000 0000000000000000 00000000ffb35190 ffffffff88024c53
Dec 9 23:31:06 AndersNAS kernel: Call Trace:
Dec 9 23:31:06 AndersNAS kernel: [<ffffffff880adae4>] sys_close+0xa4/0x100
Dec 9 23:31:06 AndersNAS kernel: [<ffffffff88024c53>] ia32_sysret+0x0/0x5
Dec 9 23:31:06 AndersNAS kernel: Code: 83 80 00 00 00 48 8b 1c 24 4c 8b 64 24 08 c9 c3 66 66 66 90 55 48 89 e5 48 83 ec 20 48 89 5d e8 4c 89 65 f0 48 89 fb 4c 89 6d f8 <48> 8b 47 30 49 89 f4 48 85 c0 74 52 48 8b 47 20 48 85 c0 74 44
Dec 9 23:31:06 AndersNAS kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
Dec 9 23:31:06 AndersNAS kernel: RSP <ffff880038e8bf28>
Dec 9 23:31:06 AndersNAS kernel: ---[ end trace 739a60f4dab3e474 ]---
Dec 9 23:31:08 AndersNAS kernel: general protection fault: 0000 [#2] SMP
Dec 9 23:31:08 AndersNAS kernel: last sysfs file: /sys/devices/virtual/block/md2/md/sync_action
Dec 9 23:31:08 AndersNAS kernel: CPU 3
Dec 9 23:31:08 AndersNAS kernel: Modules linked in: pvgpio nv6lcd nv6vpd(P)
Dec 9 23:31:08 AndersNAS kernel:
Dec 9 23:31:08 AndersNAS kernel: Pid: 14828, comm: sleep Tainted: P D 2.6.37.6.RNx86_64.2.4 #1 /
Dec 9 23:31:08 AndersNAS kernel: RIP: 0010:[<ffffffff880ac8c7>] [<ffffffff880ac8c7>] filp_close+0x17/0x90
Dec 9 23:31:09 AndersNAS kernel: RSP: 0000:ffff880038e8bcd8 EFLAGS: 00010286
Dec 9 23:31:09 AndersNAS kernel: RAX: ffff880001509818 RBX: fc98020004060008 RCX: 0000000000000000
Dec 9 23:31:09 AndersNAS kernel: RDX: 0000000000000000 RSI: ffff880039114580 RDI: fc98020004060008
Dec 9 23:31:09 AndersNAS kernel: RBP: ffff880038e8bcf8 R08: ffff880038e8a000 R09: 0000000000000001
Dec 9 23:31:09 AndersNAS kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff880039114580
Dec 9 23:31:09 AndersNAS kernel: R13: 0000000000000000 R14: ffff8800316ac600 R15: 0000000000000018
Dec 9 23:31:09 AndersNAS kernel: FS: 0000000000000000(0000) GS:ffff88003f380000(0000) knlGS:0000000000000000
Dec 9 23:31:09 AndersNAS kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Dec 9 23:31:09 AndersNAS kernel: CR2: 00000000567b602c CR3: 000000003167b000 CR4: 00000000000006e0
Dec 9 23:31:09 AndersNAS kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec 9 23:31:09 AndersNAS kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Dec 9 23:31:09 AndersNAS kernel: Process sleep (pid: 14828, threadinfo ffff880038e8a000, task ffff88003d9638e0)
Dec 9 23:31:09 AndersNAS kernel: Stack:
Dec 9 23:31:09 AndersNAS kernel: ffff88003d3ac380 0000000000000039 ffff880039114580 0000000000000000
Dec 9 23:31:09 AndersNAS kernel: ffff880038e8bd38 ffffffff88036843 0000000000000000 ffff88003d9638e0
Dec 9 23:31:09 AndersNAS kernel: ffff880039114580 ffff88003d9638e0 0000000000000071 ffff88003930a1c0
Dec 9 23:31:09 AndersNAS kernel: Call Trace:
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88036843>] put_files_struct+0xc3/0xd0
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88036895>] exit_files+0x45/0x50
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88037a20>] do_exit+0x190/0x770
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88002bce>] ? apic_timer_interrupt+0xe/0x20
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88035fd0>] ? kmsg_dump+0x110/0x160
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88006506>] oops_end+0xa6/0xb0
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88006606>] die+0x56/0x90
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff880040f2>] do_general_protection+0x152/0x160
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff885b48ef>] general_protection+0x1f/0x30
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff880ac8c7>] ? filp_close+0x17/0x90
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff880adae4>] sys_close+0xa4/0x100
Dec 9 23:31:09 AndersNAS kernel: [<ffffffff88024c53>] ia32_sysret+0x0/0x5
Dec 9 23:31:09 AndersNAS kernel: Code: 83 80 00 00 00 48 8b 1c 24 4c 8b 64 24 08 c9 c3 66 66 66 90 55 48 89 e5 48 83 ec 20 48 89 5d e8 4c 89 65 f0 48 89 fb 4c 89 6d f8 <48> 8b 47 30 49 89 f4 48 85 c0 74 52 48 8b 47 20 48 85 c0 74 44
Dec 9 23:31:09 AndersNAS kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
Dec 9 23:31:09 AndersNAS kernel: RSP <ffff880038e8bcd8>
Dec 9 23:31:09 AndersNAS kernel: ---[ end trace 739a60f4dab3e475 ]---
Dec 9 23:31:09 AndersNAS kernel: Fixing recursive fault but reboot is needed!

I have already been in contact with support about this; they advised me to do a factory reset, which I did, but the problem reappeared after about two months. I also have a ReadyNAS Pro 2 which had the same problem, but that box has been fine since I did a factory reset a couple of months ago. The problems seemed to start for me when I updated to 4.2.22. I'm running the memory test now and plan to run it for at least a week; no errors yet after 20 hours.
Message 2 of 17
apb1704
Aspirant

Re: Repeating GPF

I am getting the same problem with my ReadyNAS Pro 2 and RAIDiator 4.2.22.

Jan 3 10:08:16 velvet kernel: general protection fault: 0000 [#2] SMP
Jan 3 10:08:16 velvet kernel: last sysfs file: /sys/devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda/removable
Jan 3 10:08:16 velvet kernel: CPU 1
Jan 3 10:08:16 velvet kernel: Modules linked in: pvgpio nv6vpd(P)
Jan 3 10:08:16 velvet kernel:
Jan 3 10:08:16 velvet kernel: Pid: 9479, comm: empty_exim Tainted: P D 2.6.37.6.RNx86_64.2.4 #1 NETGEAR ReadyNAS/
Jan 3 10:08:16 velvet kernel: RIP: 0010:[<ffffffff880ac8c7>] [<ffffffff880ac8c7>] filp_close+0x17/0x90
Jan 3 10:08:16 velvet kernel: RSP: 0000:ffff88003d8efb48 EFLAGS: 00010286
Jan 3 10:08:16 velvet kernel: RAX: ffff880037f7f008 RBX: ea74af878bf591e0 RCX: 0000000000000000
Jan 3 10:08:16 velvet kernel: RDX: 0000000000000000 RSI: ffff88003d9c4000 RDI: ea74af878bf591e0
Jan 3 10:08:16 velvet kernel: RBP: ffff88003d8efb68 R08: ffff88003d8ee000 R09: 0000000000000001
Jan 3 10:08:16 velvet kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff88003d9c4000
Jan 3 10:08:16 velvet kernel: R13: 0000000000000000 R14: ffff880037ed0180 R15: 0000000000000008
Jan 3 10:08:16 velvet kernel: FS: 0000000000000000(0000) GS:ffff88003f280000(0000) knlGS:0000000000000000
Jan 3 10:08:16 velvet kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Jan 3 10:08:16 velvet kernel: CR2: 00000000f6ea1960 CR3: 0000000037c10000 CR4: 00000000000006e0
Jan 3 10:08:16 velvet kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jan 3 10:08:16 velvet kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Jan 3 10:08:16 velvet kernel: Process empty_exim (pid: 9479, threadinfo ffff88003d8ee000, task ffff880038751110)
Jan 3 10:08:16 velvet kernel: Stack:
Jan 3 10:08:16 velvet kernel: ffff880037f8fb80 0000000000000003 ffff88003d9c4000 0000000000000000
Jan 3 10:08:16 velvet kernel: ffff88003d8efba8 ffffffff88036843 0000000000000000 ffff880038751110
Jan 3 10:08:16 velvet kernel: ffff88003d9c4000 ffff880038751110 00000000000000e2 ffff880038ed43c0
Jan 3 10:08:16 velvet kernel: Call Trace:
Jan 3 10:08:16 velvet kernel: [<ffffffff88036843>] put_files_struct+0xc3/0xd0
Jan 3 10:08:16 velvet kernel: [<ffffffff88036895>] exit_files+0x45/0x50
Jan 3 10:08:16 velvet kernel: [<ffffffff88037a20>] do_exit+0x190/0x770
Jan 3 10:08:16 velvet kernel: [<ffffffff88002bce>] ? apic_timer_interrupt+0xe/0x20
Jan 3 10:08:16 velvet kernel: [<ffffffff88035fd0>] ? kmsg_dump+0x110/0x160
Jan 3 10:08:16 velvet kernel: [<ffffffff88006506>] oops_end+0xa6/0xb0
Jan 3 10:08:16 velvet kernel: [<ffffffff88006606>] die+0x56/0x90
Jan 3 10:08:16 velvet kernel: [<ffffffff880040f2>] do_general_protection+0x152/0x160
Jan 3 10:08:16 velvet kernel: [<ffffffff885b48ef>] general_protection+0x1f/0x30
Jan 3 10:08:16 velvet kernel: [<ffffffff880c5535>] ? dup_fd+0x135/0x2a0
Jan 3 10:08:16 velvet kernel: [<ffffffff880c551a>] ? dup_fd+0x11a/0x2a0
Jan 3 10:08:16 velvet kernel: [<ffffffff88032fea>] copy_process+0xa7a/0xed0
Jan 3 10:08:16 velvet kernel: [<ffffffff88020200>] ? do_page_fault+0x200/0x420
Jan 3 10:08:16 velvet kernel: [<ffffffff880336ae>] do_fork+0x9e/0x330
Jan 3 10:08:16 velvet kernel: [<ffffffff880b2416>] ? vfs_stat+0x16/0x20
Jan 3 10:08:16 velvet kernel: [<ffffffff88024ff7>] sys32_clone+0x27/0x30
Jan 3 10:08:16 velvet kernel: [<ffffffff88024dd5>] ia32_ptregs_common+0x25/0x50
Jan 3 10:08:16 velvet kernel: Code: 83 80 00 00 00 48 8b 1c 24 4c 8b 64 24 08 c9 c3 66 66 66 90 55 48 89 e5 48 83 ec 20 48 89 5d e8 4c 89 65 f0 48 89 fb 4c 89 6d f8 <48> 8b 47 30 49 89 f4 48 85 c0 74 52 48 8b 47 20 48 85 c0 74 44
Jan 3 10:08:16 velvet kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
Jan 3 10:08:16 velvet kernel: RSP <ffff88003d8efb48>
Jan 3 10:08:16 velvet kernel: ---[ end trace 03ec19f5357bbc3e ]---
Jan 3 10:08:16 velvet kernel: Fixing recursive fault but reboot is needed!
Message 3 of 17
chirpa
Luminary

Re: Repeating GPF

You need to contact tech support if you want anything to come of this report.
Message 4 of 17
pjos
Aspirant

Re: Repeating GPF

Has anyone got any information from tech support on this GPF issue? My Ultra 4 recently started crashing with the GPF mentioned above, on 4.2.22.
Message 5 of 17
Andlier
Tutor

Re: Repeating GPF

I've not solved the issue by contacting tech support. It ended up with me asking the support guy how Netgear actually goes about solving rare and intermittent faults in beta software, and I was pointed towards the beta firmware part of this forum.
My problem history so far:
No memory errors after a solid week of continuous memory testing.
Tried a factory reset on both my Ultra 6 and Pro 2; GPFs are still happening every couple of weeks.
Tried going back to firmware 4.2.20 and 4.2.21 on the Ultra 6, no difference. I'm now running 4.2.20 on the Ultra 6 and 4.2.22 on the Pro 2.
Tech support indicated that I had a couple of drives in the Ultra 6 that were not on the "approved hardware" list, but my Ultra 6 worked fine for two years before the problems started, so I don't believe that is the cause. I haven't made any changes to the hardware.
Tech support then mentioned that I could have carried the problem through the factory reset by restoring my old configuration. I'm not keen on doing another factory reset and having to configure everything from scratch, nor on removing the drives just because they are not on the supported list; the hardware worked fine for several years, even on the same firmware version I'm running now, 4.2.20. The support case was closed without any real answers. That was my second support case on this issue, and I've given up on tech support for now...

Now I'm trying to change one setting at a time to see if it makes any difference. I've started by disabling all volume maintenance tasks. So far the Pro 2 has not had any problems in the month since I did this, so maybe it is related. But the Ultra 6 is still locking up every other week or so, even with volume maintenance turned off. Next I'll turn off the weekly backup jobs to see if that makes any difference...

Any input on how to debug this further/faster would be much appreciated!
Message 6 of 17
mdgm-ntgr
NETGEAR Employee Retired

Re: Repeating GPF

Can you try the "disk test" boot option (if you haven't already)? http://www.readynas.com/kb/faq/boot/how_do_i_use_the_boot_menu
Message 7 of 17
Andlier
Tutor

Re: Repeating GPF

Already tried that months ago on the Ultra 6, and tried again tonight; no errors found. (Not sure how it would display errors; it just ends up booting normally, with no errors reported in the RAIDar utility or the web interface.)

I turned weekly scrubbing and file system check back on a couple of days ago, and it got a GPF and hung again this morning, a couple of hours after the scrub finished without errors. Still not sure if it is related, but problems seem to happen more often with scrubbing/file system check turned on. The Pro 2 hasn't had any problems since those jobs were turned off.
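
If scrubbing is the suspect, one way to check without waiting weeks is to interleave md's maintenance messages with the fault lines from a saved kernel log and compare timestamps. A minimal sketch; the md "data-check" message text here is an assumption for illustration, and sample.log stands in for the real kernel.log:

```shell
#!/bin/sh
# Sketch: pull RAID maintenance events and GPFs out together so their
# timestamps can be compared. The md message wording is an assumption;
# check what your kernel actually logs. Sample lines for illustration.
cat > sample.log <<'EOF'
Dec  9 03:00:01 AndersNAS kernel: md: data-check of RAID array md2
Dec  9 09:14:22 AndersNAS kernel: md: md2: data-check done.
Dec  9 23:31:06 AndersNAS kernel: general protection fault: 0000 [#1] SMP
EOF
grep -E 'md: |general protection fault' sample.log
```

If the GPFs consistently land within hours of a scrub finishing, that is at least a reproducible trigger to hand to support.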

Another thing I remember now is that the problems started a month or so after I did a BIOS upgrade. I upgraded from 05/26/2010 FLAME6-2 V1.1 to 06/10/2010 FLAME6-2 V1.1 to fix the wake-on-LAN issue (http://www.readynas.com/forum/viewtopic.php?p=268809#p268809). If this is the cause of the problems, it would explain why the factory reset didn't fix them. I'm willing to try downgrading the BIOS and doing another factory reset, as I would rather have a stable system without WOL than an unstable one with it...
Message 8 of 17
pjos
Aspirant

Re: Repeating GPF

Andlier wrote:
I've not solved the issue by contacting tech-support.


Hmm, not good 😞.

I have two Ultra 4 boxes. One of them, with two currently supported ST2000DM001 disks, suddenly started having the GPF problem after a month of flawless operation. The other, with four disks (which have since been removed from the HCL), has been running for 6 months without any problems.

Is there a way to check the BIOS version?
Message 9 of 17
mdgm-ntgr
NETGEAR Employee Retired

Re: Repeating GPF

This sounds more like a firmware or hardware issue than a BIOS issue.

If you download your logs (Status > Logs > Download all logs), the BIOS version is in bios_ver.log.

The firmware update history is in initrd.log
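
Once the downloaded log bundle is extracted, that check can be scripted. A sketch using the file names from this post; the sample contents written below are placeholders for illustration, not real log data:

```shell
#!/bin/sh
# Sketch: report the BIOS version and recent firmware history from an
# extracted log bundle. File names come from the post above; the sample
# contents created here are placeholders for illustration only.
cat > bios_ver.log <<'EOF'
06/10/2010 FLAME6-2 V1.1
EOF
cat > initrd.log <<'EOF'
Installed RAIDiator 4.2.21
Installed RAIDiator 4.2.22
EOF
echo "BIOS: $(cat bios_ver.log)"
echo "Recent firmware entries:"
tail -n 5 initrd.log
```

Comparing bios_ver.log across affected and unaffected boxes would quickly confirm or rule out Andlier's BIOS-upgrade theory.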
Message 10 of 17
it4geeks
Aspirant

Re: Repeating GPF

This happens to me also, here in Panama.

I have a ReadyNAS Pro 4 on the latest firmware, 4.2.22.

I'm getting these errors in kernel.log:

Feb 20 20:53:31 IBSLAB kernel: general protection fault: 0000 [#1] SMP
Feb 20 20:53:31 IBSLAB kernel: last sysfs file: /sys/devices/platform/it87.2576/fan1_input
Feb 20 20:53:31 IBSLAB kernel: CPU 2
Feb 20 20:53:31 IBSLAB kernel: Modules linked in: pvgpio nv6vpd(P)
Feb 20 20:53:31 IBSLAB kernel:
Feb 20 20:53:31 IBSLAB kernel: Pid: 9303, comm: empty_exim Tainted: P 2.6.37.6.RNx86_64.2.4 #1 NETGEAR ReadyNAS/ReadyNAS
Feb 20 20:53:31 IBSLAB kernel: RIP: 0010:[<ffffffff880c5535>] [<ffffffff880c5535>] dup_fd+0x135/0x2a0
Feb 20 20:53:31 IBSLAB kernel: RSP: 0000:ffff8800309a1d98 EFLAGS: 00010206
Feb 20 20:53:31 IBSLAB kernel: RAX: ffff880030663ee0 RBX: ffff8800309db010 RCX: 0000000000000000
Feb 20 20:53:31 IBSLAB kernel: RDX: 0000000000000000 RSI: 1300ffffffffffff RDI: 00000000000000ff
Feb 20 20:53:31 IBSLAB kernel: RBP: ffff8800309a1df8 R08: ffff8800393d0808 R09: 0000000000000001
Feb 20 20:53:31 IBSLAB kernel: R10: 0000000000000001 R11: ffff88003d877780 R12: ffff880030663880
Feb 20 20:53:31 IBSLAB kernel: R13: 0000000000000100 R14: ffff88003027be80 R15: 0000000000000020
Feb 20 20:53:31 IBSLAB kernel: FS: 0000000000000000(0000) GS:ffff88003f300000(0063) knlGS:00000000f76236b0
Feb 20 20:53:31 IBSLAB kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Feb 20 20:53:31 IBSLAB kernel: CR2: 00000000080f9100 CR3: 0000000038c84000 CR4: 00000000000006e0
Feb 20 20:53:31 IBSLAB kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Feb 20 20:53:31 IBSLAB kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Feb 20 20:53:31 IBSLAB kernel: Process empty_exim (pid: 9303, threadinfo ffff8800309a0000, task ffff88003017c440)
Feb 20 20:53:31 IBSLAB kernel: Stack:
Feb 20 20:53:31 IBSLAB kernel: 0000000001200011 ffff8800309a1e5c ffff88003d877700 ffff88003081d8c0
Feb 20 20:53:31 IBSLAB kernel: ffff88003081d8c0 ffff8800393d0800 ffff88003d877780 0000000000000000
Feb 20 20:53:31 IBSLAB kernel: ffff88003017c440 0000000001200011 ffff88003080c9f0 ffff88003080c9f0
Feb 20 20:53:31 IBSLAB kernel: Call Trace:
Feb 20 20:53:31 IBSLAB kernel: [<ffffffff88032fea>] copy_process+0xa7a/0xed0
Feb 20 20:53:31 IBSLAB kernel: [<ffffffff88020200>] ? do_page_fault+0x200/0x420
Feb 20 20:53:31 IBSLAB kernel: [<ffffffff880336ae>] do_fork+0x9e/0x330
Feb 20 20:53:31 IBSLAB kernel: [<ffffffff880b2416>] ? vfs_stat+0x16/0x20
Feb 20 20:53:31 IBSLAB kernel: [<ffffffff88024ff7>] sys32_clone+0x27/0x30
Feb 20 20:53:31 IBSLAB kernel: [<ffffffff88024dd5>] ia32_ptregs_common+0x25/0x50
Feb 20 20:53:31 IBSLAB kernel: Code: 8b 7c 24 10 49 8b 76 10 4c 89 fa e8 46 cf 1f 00 45 85 ed 0f 84 a1 00 00 00 4c 8b 45 c8 44 89 ef 45 31 c9 41 ba 01 00 00 00 eb 13 <f0> 48 ff 46 30 49 89 30 49 ff c1 49 83 c0 08 ff cf 74 6b 48 8b
Feb 20 20:53:31 IBSLAB kernel: RIP [<ffffffff880c5535>] dup_fd+0x135/0x2a0
Feb 20 20:53:31 IBSLAB kernel: RSP <ffff8800309a1d98>
Feb 20 20:53:31 IBSLAB kernel: ---[ end trace 45161de5ceb45432 ]---
Feb 20 20:53:34 IBSLAB kernel: general protection fault: 0000 [#2] SMP
Feb 20 20:53:34 IBSLAB kernel: last sysfs file: /sys/devices/platform/it87.2576/fan1_input
Feb 20 20:53:34 IBSLAB kernel: CPU 0
Feb 20 20:53:34 IBSLAB kernel: Modules linked in: pvgpio nv6vpd(P)
Feb 20 20:53:34 IBSLAB kernel:
Feb 20 20:53:34 IBSLAB kernel: Pid: 9303, comm: empty_exim Tainted: P D 2.6.37.6.RNx86_64.2.4 #1 NETGEAR ReadyNAS/ReadyNAS
Feb 20 20:53:34 IBSLAB kernel: RIP: 0010:[<ffffffff880ac8c7>] [<ffffffff880ac8c7>] filp_close+0x17/0x90
Feb 20 20:53:34 IBSLAB kernel: RSP: 0000:ffff8800309a1b48 EFLAGS: 00010286
Feb 20 20:53:34 IBSLAB kernel: RAX: ffff8800309db008 RBX: 1300ffffffffffff RCX: 0000000000000000
Feb 20 20:53:34 IBSLAB kernel: RDX: 0000000000000000 RSI: ffff88003d877700 RDI: 1300ffffffffffff
Feb 20 20:53:34 IBSLAB kernel: RBP: ffff8800309a1b68 R08: ffff8800309a0000 R09: 0000000000000001
Feb 20 20:53:34 IBSLAB kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffff88003d877700
Feb 20 20:53:34 IBSLAB kernel: R13: 0000000000000000 R14: ffff88003027be80 R15: 0000000000000008
Feb 20 20:53:34 IBSLAB kernel: FS: 0000000000000000(0000) GS:ffff88003f200000(0000) knlGS:0000000000000000
Feb 20 20:53:34 IBSLAB kernel: CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
Feb 20 20:53:34 IBSLAB kernel: CR2: 00000000f757f000 CR3: 000000003024a000 CR4: 00000000000006f0
Feb 20 20:53:34 IBSLAB kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Feb 20 20:53:34 IBSLAB kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Feb 20 20:53:34 IBSLAB kernel: Process empty_exim (pid: 9303, threadinfo ffff8800309a0000, task ffff88003017c440)
Feb 20 20:53:34 IBSLAB kernel: Stack:
Feb 20 20:53:34 IBSLAB kernel: ffff88003d2b2a00 0000000000000003 ffff88003d877700 0000000000000000
Feb 20 20:53:34 IBSLAB kernel: ffff8800309a1ba8 ffffffff88036843 0000000000000000 ffff88003017c440
Feb 20 20:53:34 IBSLAB kernel: ffff88003d877700 ffff88003017c440 00000000000000e2 ffff88003d8c0f00
Feb 20 20:53:34 IBSLAB kernel: Call Trace:
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88036843>] put_files_struct+0xc3/0xd0
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88036895>] exit_files+0x45/0x50
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88037a20>] do_exit+0x190/0x770
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88002bce>] ? apic_timer_interrupt+0xe/0x20
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88035fd0>] ? kmsg_dump+0x110/0x160
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88006506>] oops_end+0xa6/0xb0
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88006606>] die+0x56/0x90
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff880040f2>] do_general_protection+0x152/0x160
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff885b48ef>] general_protection+0x1f/0x30
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff880c5535>] ? dup_fd+0x135/0x2a0
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff880c551a>] ? dup_fd+0x11a/0x2a0
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88032fea>] copy_process+0xa7a/0xed0
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88020200>] ? do_page_fault+0x200/0x420
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff880336ae>] do_fork+0x9e/0x330
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff880b2416>] ? vfs_stat+0x16/0x20
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88024ff7>] sys32_clone+0x27/0x30
Feb 20 20:53:34 IBSLAB kernel: [<ffffffff88024dd5>] ia32_ptregs_common+0x25/0x50
Feb 20 20:53:34 IBSLAB kernel: Code: 83 80 00 00 00 48 8b 1c 24 4c 8b 64 24 08 c9 c3 66 66 66 90 55 48 89 e5 48 83 ec 20 48 89 5d e8 4c 89 65 f0 48 89 fb 4c 89 6d f8 <48> 8b 47 30 49 89 f4 48 85 c0 74 52 48 8b 47 20 48 85 c0 74 44
Feb 20 20:53:34 IBSLAB kernel: RIP [<ffffffff880ac8c7>] filp_close+0x17/0x90
Feb 20 20:53:34 IBSLAB kernel: RSP <ffff8800309a1b48>
Feb 20 20:53:34 IBSLAB kernel: ---[ end trace 45161de5ceb45433 ]---
Feb 20 20:53:34 IBSLAB kernel: Fixing recursive fault but reboot is needed!
Message 11 of 17
Andlier
Tutor

Re: Repeating GPF

Hi again,

After almost 4 weeks of problem-free operation following a full factory reset (this time I did not restore any previous settings), the general protection fault happened again. I had almost come to believe the problem had disappeared after that last factory reset, but apparently not. I am now running 4.2.20, since that was the last firmware revision I used successfully before these problems started about half a year ago.

Any input on how to debug this issue further would be greatly appreciated! Are there any tools I could run on the ReadyNAS that would detect and log a process violating memory boundaries, for example? Any ideas on how to shorten the time between faults? It is tedious to have to wait 4 weeks for the problem to occur each time!

Since I see the same issues at approximately the same interval on both my ReadyNAS Ultra 6 and Pro 2, on several different firmware revisions (4.2.20, 4.2.21 and 4.2.22), and a factory reset did not help on either of them, I must say I suspect a firmware issue, maybe in combination with my network environment/configuration... Are there any logs or procedures that could help sort out issues like that? One thing a bit out of the ordinary about my setup is that the Ultra 6 is connected to two different LAN subnets, one on each network port; could that be an issue? Any other similar things to look out for?
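
There is no obvious userland tool on RAIDiator that can catch kernel-side memory corruption, but you can at least timestamp each new fault precisely, which makes the interval between crashes easier to correlate with scrubs, backups and network events. A minimal cron-able watchdog sketch; every path here is an assumption, and sample.log stands in for /var/log/kernel.log:

```shell
#!/bin/sh
# Watchdog sketch: record when the GPF count in the kernel log grows.
# Intended to be run from cron; all file names are assumptions, and
# sample.log stands in for the real /var/log/kernel.log.
LOG=sample.log
STATE=gpf.count
HIST=gpf.history
printf '%s\n' 'Dec  9 23:31:06 AndersNAS kernel: general protection fault: 0000 [#1] SMP' > "$LOG"
last=$(cat "$STATE" 2>/dev/null || echo 0)
now=$(grep -c 'general protection fault' "$LOG")
if [ "$now" -gt "$last" ]; then
    # A new fault appeared since the last run; log when we noticed it.
    echo "$(date): GPF count $last -> $now" >> "$HIST"
fi
echo "$now" > "$STATE"
```

Because the box often keeps limping along for hours after the first oops, a record like this pins down the original fault time even if the syslog is only inspected after a power cycle.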
Message 12 of 17
chirpa
Luminary

Re: Repeating GPF

Support/Engineering needs to step up and help diagnose it.
Message 13 of 17
Andlier
Tutor

Re: Repeating GPF

chirpa wrote:
Support/Engineering needs to step up and help diagnose it.

I've already made a new support ticket, but that will be the third I've submitted for this exact problem; the previous support sessions didn't come up with anything useful except trying a factory reset... Let's hope they can find something this time.
Message 14 of 17
JanneK
Aspirant

Re: Repeating GPF

I have the same problem.
My ReadyNAS Ultra 2 Plus is connected to a Netgear WNDR3700, and it happens about once every week or two.
Message 15 of 17
Andlier
Tutor

Re: Repeating GPF

Just a quick update from my side: the problem is not resolved yet, but there is some progress with Netgear support. After I described my network topology and allowed tech support SSH access to my Ultra 6, they quickly decided to RMA the box, even though the RAM test and disk test were fine and I had tried a factory default without success.

The RMA took some time as usual; they still haven't picked up my old device (I opted for the advance-exchange RMA procedure, which is not free). The new box they sent ran for 3-4 weeks, and then boom, the same thing happened again. Well, almost the same thing: there is no "general protection fault" message in the log this time, but it did crash badly with lots of errors and a call trace in the log. So I'm in contact with support again, and they again want SSH access to my box.

In the meantime I'm following this thread, which seems to be related: https://www.readynas.com/forum/viewtopic.php?t=69606&p=386216. I do have a 100 Mbps switch connected to my box (in addition to a 1 Gbps switch on the other port), but it seems a bit strange that a 100 Mbps switch should cause such problems...

Still waiting for a resolution; it's been almost 8 months since this issue first appeared for me. At least I feel Netgear is taking it seriously now. It is also a bit comforting to see others here with exactly the same symptoms, as that makes it more likely Netgear will look into this seriously.
Message 16 of 17
chopsywa
Aspirant

Re: Repeating GPF

We have encountered this error on a couple of client machines, and it has been driving us crazy. Both machines (a Pro 4 and a Pro 6) have been replaced.

This is not a hardware fault as such.

In both instances the units have one Ethernet port directly connected to a server using jumbo frames for VMware; the other is plugged into the switch for management.

I believe a packet on the network is causing the low-level network drivers to crash the box.

I have noticed some other things when the problems happen:

1. Either just before or soon after the GPF, there are many errors in the log like this:
Dec 29 23:28:55 PERTH-STORE01 kernel: UDP: short packet: From 192.168.60.9:17500 130/99 to 192.168.60.255:17500
The packets are always UDP to broadcast addresses. At this site there is also an NVX, and there are no errors like this in its logs.
2. The management interface is unresponsive, even to pings, yet the NFS interface is still alive.
3. Sometime between a couple of hours and a day or two later, it hard locks up and needs a power cycle.
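
The source of those short broadcasts can be tallied from a saved log. As it happens, 17500/udp is the port Dropbox's LAN sync discovery broadcasts on, so a machine running Dropbox on that subnet is a plausible sender. A sketch, using the line quoted above as sample input:

```shell
#!/bin/sh
# Sketch: tally which hosts are sending the malformed UDP broadcasts.
# sample.log stands in for the real kernel.log; the one line below is
# copied from the post above.
cat > sample.log <<'EOF'
Dec 29 23:28:55 PERTH-STORE01 kernel: UDP: short packet: From 192.168.60.9:17500 130/99 to 192.168.60.255:17500
EOF
grep 'UDP: short packet' sample.log \
  | sed 's/.*From \([0-9.]*\):.*/\1/' \
  | sort | uniq -c | sort -rn
```

Pointing this at the real kernel.log shows which host to suspect; unplugging or reconfiguring that one machine is a cheaper experiment than isolating the whole management LAN.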

We now have one of these units unplugged from the management LAN and accessible only via the connected server. So far so good; I will report back in a week or so. It usually fails within 3 or 4 days.
Message 17 of 17