Forum Discussion
amrob2 (Apprentice)
Oct 27, 2017
Keep getting NETWORK CHANGED message since upgrading to 6.9.0
Yesterday I upgraded the firmware on my RN 316 to the latest version, 6.9.0. Since upgrading I keep getting the words NETWORK CHANGED on the NAS display, which then changes to NO IP ADDRESS followed by 192.168.0.22.
This happens when I copy large quantities of data to the NAS. If I am not copying anything, the NAS display stays blank, but the second a large file transfer starts, the display does as explained above. I also use the NAS as a DLNA server connected to my Samsung TV, and whenever the NETWORK CHANGED message appears the TV loses access to the DLNA server, so the video I am watching on the TV errors out and I have to wait for the TV to reconnect to the NAS.
This is what I have done to ascertain where the problem lies:
The NAS has a static IP address of 192.168.0.22 allocated on ethernet 0; ethernet 1 is set to DHCP.
The network router is used as a DHCP server, so it will automatically allocate IP address 192.168.0.22 via the MAC address of ethernet 0 on the NAS (even though ethernet 0 is set to static with the same IP).
The network router does not allocate an IP address via the MAC address of ethernet 1, so if that port is used then the first available free IP will be allocated.
When the problem first occurred, I disconnected the ethernet cable from the NAS and plugged it back into the same port - just in case the ethernet cable had become loose - this had no effect.
I then disconnected the ethernet cable from ethernet 0 and plugged a different ethernet cable into ethernet 1 which was also connected to a different port on the router. This had no effect and the NETWORK CHANGED message continued to appear on the NAS whenever a large file transfer was initiated.
Doing this proved that it wasn't a problem with static vs DHCP on the network or a faulty ethernet cable / router port, as the second test used a different cable and router port and showed that the problem continued when I went from a static to a DHCP connection.
As the problem only started yesterday after installing OS 6.9.0, I have to assume that a bug in that OS update is causing the network connection to play up, as I have never had any problems prior to this update.
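In case it helps anyone trying to reproduce this, leaving a continuous ping to the NAS running from another machine while a large copy is in progress shows exactly when the connection drops (192.168.0.22 is my NAS address; substitute your own):
# Linux/macOS ping runs continuously by default; on Windows add -t.
ping 192.168.0.22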
Is anyone else having similar issues? Is this a known problem?
Thanks for the detailed information. We are now able to duplicate the issue using a Nighthawk S8000 switch. After updating the firmware to the version I mentioned earlier, the problem went away. You can now download 6.9.1-T119 here, and it should resolve the issue. Please confirm if it works for you.
75 Replies
- Sandshark (Sensei - Experienced User)
Setting your NAS to a static IP in the same range that your router is giving out addresses from will cause that. The router does not know that the NAS has that address, and is also handing it out to your TV.
Using a static IP on the NAS is not really the best way to accomplish having a constant IP. It is better to reserve the address on the router side and let the NAS use DHCP. Doing it that way prevents issues when you upgrade the router if it uses a different address range.
If you cannot reserve an address in your router, then set the static IP well outside the range your router gives addresses out from, like 192.168.0.200.
Since you have a NAS with two ethernet ports, you should be able to go in through the one set for DHCP and set the other one up the same. Then go to your router and reserve the IP it already has or a new one. If it's a new one, you'll need to reboot the NAS for it to claim it.
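If you want to double-check whether anything else on the LAN already holds the NAS's address, and assuming you have a Linux machine with arping installed (it is not in every default install), a duplicate-address probe along these lines works:
# Probe for a duplicate of 192.168.0.22 on the LAN; replace eth0 with the
# interface of the machine you run this from. A reply means something else
# already owns the address; no reply after 3 probes means it looks free.
sudo arping -D -I eth0 -c 3 192.168.0.22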
- amrob2 (Apprentice)
I may not have made it very clear in the opening message, but the TV does NOT have the same IP as the NAS. I allocate static IPs to some of my devices and let some be DHCP. Static IPs are on things like servers, Blu-ray players, TVs and printers. This is because I use certain ranges of IPs for specific device types.
No IP is duplicated anywhere on the network.
This problem has only started since upgrading the firmware. No network infrastructure changes have taken place to cause this, nor have there been any configuration changes on the network. The only change has been the firmware on the server - and that is why I assume that there is an issue with OS 6.9.0.
Per the opening post, I ascertained whether there were any problems with static vs DHCP IP allocation, and the problem occurred regardless of how the IP was allocated.
- amrob2 (Apprentice)
Further to Sandshark's reply, I changed the NAS to DHCP this morning just to see if that would make any difference.
2 minutes into a large data transfer and again "NETWORK CHANGED" "No IP Address" "192.168.0.22"
There is a DEFINITE problem with OS 6.9.0. It is not my configuration causing this, as I have changed it to what you suggested.
Can someone PLEASE flag this up to the support people to investigate. I have a second RN 316 server that is awaiting a firmware upgrade and that is not going to happen until this is resolved.
I should know not to upgrade to .0 firmware updates. I was bitten on the bum recently with iOS upgrades on my watch, iPads and iPhones where none of them work correctly even after three software patches.
- Tinyhorns (Apprentice)
Do you have shell access to the box?
I did see link up/down a lot in my dmesg.
If you can log in to it, can you type 'dmesg -T' and see if the timestamps tell you anything fun?
I have not started troubleshooting it yet, but it might be a bug in the Linux kernel that is in use; I'm not sure yet.
But from what I can see in my own logs, it seems like the OS thinks that the network interface is switching between up and down, which might explain your problems as well as my own.
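Something along these lines should pull out just the link and bond messages with readable timestamps (adjust the grep pattern as needed):
# Show only the NIC/bond link messages from the kernel log, with timestamps.
dmesg -T | grep -Ei 'bond0|eth[01]'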
// T
- Tinyhorns (Apprentice)
I also have serious network issues with 6.9.0.
My bonding keeps going up and down, rendering the NAS useless.
Worked like a charm for the last 2 years on different firmware versions, but now it's not working at all.
I have tried 2 different switches, with the same result on both.
Switch 1: Cisco 3750, LACP configured.
Switch 2: HP 1920-16G, LACP configured.
The NAS has LACP configured as well.
Something is seriously broken with the network implementation in 6.9.0.
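For reference, the standard Linux bonding status file on the NAS shows the LACP state and a per-slave link failure count, which makes it easy to see how often each port has flapped:
# Bond/LACP status as the kernel sees it, including "Link Failure Count" per slave.
cat /proc/net/bonding/bond0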
See logs below:
[17/10/27 22:55:28 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 22:55:50 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 22:57:57 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 22:58:02 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 22:59:03 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:00:57 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:01:18 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:02:24 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:05:05 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:05:30 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:05:30 CEST] warning:system:LOGMSG_SENT_ALERT_MESG_FAILED Alert message failed to send.
[17/10/27 23:25:16 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:29:17 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:30:43 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:32:24 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:33:54 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:33:54 CEST] warning:system:LOGMSG_SENT_ALERT_MESG_FAILED Alert message failed to send.
[17/10/27 23:34:48 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:36:31 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:39:17 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:40:03 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:54:30 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:54:56 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:57:22 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
[17/10/27 23:57:27 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth1 offline.
[17/10/27 23:59:07 CEST] warning:system:LOGMSG_BOND_NETWORK_SLAVE_NIC_DOWN Bond interface bond0 has slave interface eth0 offline.
- Tinyhorns (Apprentice)
After some more testing, it seems like putting heavy load on the NAS amplifies the disconnects.
I started a copy over the network at 18:40ish.
Clearly the "resets" of the interfaces happen more often then.
The switch keeps reporting the same.
[Sat Oct 28 18:10:51 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:10:58 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:24:03 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:24:53 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:25:19 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:25:28 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:25:29 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:26:35 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:26:38 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:35:54 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:36:44 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:39:16 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:39:40 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:39:41 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:40:34 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:40:36 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:41:19 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:41:20 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:42:13 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:42:14 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:43:11 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:43:14 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:43:16 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:44:12 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:44:16 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:44:17 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:45:09 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:45:10 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:46:13 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:46:14 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:43:19 2017] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
[Sat Oct 28 18:44:08 2017] e1000e: eth0 NIC Link is Down
[Sat Oct 28 18:44:08 2017] e1000e 0000:01:00.0 eth0: speed changed to 0 for port eth0
[Sat Oct 28 18:44:08 2017] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:44:12 2017] e1000e: eth0 NIC Link is Down
[Sat Oct 28 18:44:12 2017] e1000e 0000:01:00.0 eth0: speed changed to 0 for port eth0
[Sat Oct 28 18:44:12 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:44:14 2017] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:44:14 2017] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
[Sat Oct 28 18:44:16 2017] e1000e: eth1 NIC Link is Down
[Sat Oct 28 18:44:16 2017] e1000e 0000:04:00.0 eth1: speed changed to 0 for port eth1
[Sat Oct 28 18:44:16 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:44:16 2017] bond0: first active interface up!
[Sat Oct 28 18:44:17 2017] e1000e: eth0 NIC Link is Down
[Sat Oct 28 18:44:17 2017] e1000e 0000:01:00.0 eth0: speed changed to 0 for port eth0
[Sat Oct 28 18:44:17 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:44:17 2017] bond0: first active interface up!
[Sat Oct 28 18:44:19 2017] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:44:19 2017] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
[Sat Oct 28 18:44:19 2017] bond0: first active interface up!
[Sat Oct 28 18:44:20 2017] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:44:20 2017] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
[Sat Oct 28 18:45:09 2017] e1000e: eth0 NIC Link is Down
[Sat Oct 28 18:45:09 2017] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:45:09 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:45:10 2017] e1000e: eth1 NIC Link is Down
[Sat Oct 28 18:45:10 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:45:10 2017] bond0: first active interface up!
[Sat Oct 28 18:45:12 2017] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:45:12 2017] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
[Sat Oct 28 18:45:12 2017] bond0: first active interface up!
[Sat Oct 28 18:45:13 2017] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:45:13 2017] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
[Sat Oct 28 18:46:13 2017] e1000e: eth0 NIC Link is Down
[Sat Oct 28 18:46:13 2017] e1000e 0000:01:00.0 eth0: speed changed to 0 for port eth0
[Sat Oct 28 18:46:13 2017] bond0: link status definitely down for interface eth0, disabling it
[Sat Oct 28 18:46:13 2017] bond0: first active interface up!
[Sat Oct 28 18:46:14 2017] e1000e: eth1 NIC Link is Down
[Sat Oct 28 18:46:14 2017] e1000e 0000:04:00.0 eth1: speed changed to 0 for port eth1
[Sat Oct 28 18:46:14 2017] bond0: link status definitely down for interface eth1, disabling it
[Sat Oct 28 18:46:14 2017] bond0: first active interface up!
[Sat Oct 28 18:46:16 2017] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:46:16 2017] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
[Sat Oct 28 18:46:16 2017] bond0: first active interface up!
[Sat Oct 28 18:46:17 2017] e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[Sat Oct 28 18:46:17 2017] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
- amrob2 (Apprentice)
Yes, it is the same with me, Tinyhorns. Small file copies of 100 MB or so are fine, but when you get into GB-and-up file sizes, that is when the disconnects start and the display starts doing the "No IP Address" followed by the allocated IP address.
It is likely something really small and silly that has been changed and very quick to fix, but as it seems Netgear themselves don't monitor this forum, it could take several days for a fix to come through.
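In the meantime, for anyone wanting a quick way to reproduce it, writing a few GB to a share from a Linux client seems to be enough. A rough sketch (the mount point /mnt/nas is just an example - point it at wherever your NAS share is mounted):
# Write ~4 GB of zeros to the mounted share to generate sustained traffic,
# then remove the test file afterwards.
dd if=/dev/zero of=/mnt/nas/testfile bs=1M count=4096
rm /mnt/nas/testfile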
- Tinyhorns (Apprentice)
I'm sure they do not check it on weekends, but hopefully they will during next week :)
- amrob2 (Apprentice)
Tinyhorns, let's hope so! I cannot keep going with my NAS like this. I have seen on other threads that people having problems with backups have reverted back to 6.8.1 against Netgear's advice, but as I have around 10TB of data on my NAS I cannot risk bricking the device and potentially losing all my data.
I can't even run a backup from this NAS to my second NAS (which is used to back up my primary NAS) owing to the network drop errors.
When you contacted Netgear and they opened the ticket, did they give any indication as to whether they agree with you and me that there is a bug in 6.9.0?
- mdgm-ntgr (NETGEAR Employee Retired)
We are monitoring this community (that's what I spend most of my time doing) and we have seen a higher than normal level of reports of networking issues for users running 6.9.0. We are investigating.
It is early Monday morning in the U.S. and I have relayed my concerns to the engineering team. Feel free to send your logs in (see the Sending Logs link in my sig).
- Tinyhorns (Apprentice)
Good news, the engineers at Netgear have confirmed the bug and are working on a fix that will be available in 6.9.1.
Let's just hope that the patch will be out soon.
// T
- bugsplat (Tutor)
Well, my backup failed because of this bug ... so for now, I've shut down the RN314 :smileyfrustrated:
- amrob2 (Apprentice)
Oh that is good news Tinyhorns. The sooner a fix is out the better as I have not been able to even attempt a backup of my server since upgrading to 6.9.0 and I would be devastated if something happened to a hard drive on it between now and the fix being released.
It does make me wonder how much testing was carried out prior to releasing 6.9.0 though. Considering the network connection resets/drops (whatever!) as soon as you put a read/write load of as little as 1GB on it, this should have easily been picked up in the development stage and most definitely in the UAT stage.
- StephenB (Guru - Experienced User)
amrob2 wrote:
Considering the network connection resets/drops (whatever!) as soon as you put a read/write load of as little as 1GB on it, this should have easily been picked up in the development stage and most definitely in the UAT stage.
FWIW, this is not showing up on my two RN52x systems. So it might not be that easy to detect (even though it is clearly impacting some users).
- vikingfmo (Initiate)
Hi,
Please don't forget the RN214 (ARM processor). Since I upgraded to 6.9.0, I have the message:
Fri Nov 3 2017 13:35:37 System: Bond interface bond0 has slave interface eth0 offline
I tried using a static IP and DHCP; same issue.
Logs for eth0:
root@viking-nas:~# dmesg -T | grep "eth0"
[Fri Nov 3 13:34:44 2017] al_eth 0000:00:01.0 eth0: AnnapurnaLabs unified 1Gbe/10Gbe found at mem fe000000, mac addr a0:63:91:9c:1b:fa
[Fri Nov 3 13:34:47 2017] al_eth 0000:00:01.0 eth0: using MSI-X per Queue interrupt mode
[Fri Nov 3 13:34:47 2017] al_eth 0000:00:01.0 eth0: phy[4]: device 8:04, driver Atheros 8035 ethernet
[Fri Nov 3 13:34:47 2017] al_eth 0000:00:01.0 eth0: phy[4]:supported 2ef adv 2ef
[Fri Nov 3 13:34:47 2017] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Fri Nov 3 13:34:51 2017] al_eth 0000:00:01.0 eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[Fri Nov 3 13:34:51 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:01.0 eth0: al_eth_down
[Fri Nov 3 13:35:05 2017] bond0: Adding slave eth0
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:01.0 eth0: using MSI-X per Queue interrupt mode
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:01.0 eth0: phy[4]: device 8:04, driver Atheros 8035 ethernet
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:01.0 eth0: phy[4]:supported 2ef adv 2ef
[Fri Nov 3 13:35:05 2017] bond0: Enslaving eth0 as a backup interface with a down link
[Fri Nov 3 13:35:09 2017] al_eth 0000:00:01.0 eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[Fri Nov 3 13:35:09 2017] bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
[Fri Nov 3 13:35:19 2017] NETDEV WATCHDOG: eth0 (al_eth): transmit queue 3 timed out
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: al_eth_reset_task restarting interface
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: al_eth_down
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x0
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x1
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x2
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x3
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x4
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x5
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x6
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x7
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x8
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0x9
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0xa
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0xb
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0xc
[Fri Nov 3 13:35:19 2017] al_eth 0000:00:01.0 eth0: free uncompleted tx skb qid 3 idx 0xd
[Fri Nov 3 13:35:20 2017] al_eth 0000:00:01.0 eth0: using MSI-X per Queue interrupt mode
[Fri Nov 3 13:35:20 2017] bond0: link status definitely down for interface eth0, disabling it
Logs for eth1:
root@viking-nas:~# dmesg -T | grep "eth1"
[Fri Nov 3 13:34:44 2017] al_eth 0000:00:03.0 eth1: AnnapurnaLabs unified 1Gbe/10Gbe found at mem fe020000, mac addr a0:63:91:9c:1b:fb
[Fri Nov 3 13:34:47 2017] al_eth 0000:00:03.0 eth1: using MSI-X per Queue interrupt mode
[Fri Nov 3 13:34:47 2017] al_eth 0000:00:03.0 eth1: phy[5]: device 18:05, driver Atheros 8035 ethernet
[Fri Nov 3 13:34:47 2017] al_eth 0000:00:03.0 eth1: phy[5]:supported 2ef adv 2ef
[Fri Nov 3 13:34:47 2017] IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready
[Fri Nov 3 13:34:51 2017] al_eth 0000:00:03.0 eth1: Link is Up - 1Gbps/Full - flow control rx/tx
[Fri Nov 3 13:34:51 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:03.0 eth1: al_eth_down
[Fri Nov 3 13:35:05 2017] bond0: Adding slave eth1
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:03.0 eth1: using MSI-X per Queue interrupt mode
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:03.0 eth1: phy[5]: device 18:05, driver Atheros 8035 ethernet
[Fri Nov 3 13:35:05 2017] al_eth 0000:00:03.0 eth1: phy[5]:supported 2ef adv 2ef
[Fri Nov 3 13:35:05 2017] bond0: Enslaving eth1 as a backup interface with a down link
[Fri Nov 3 13:35:09 2017] al_eth 0000:00:03.0 eth1: Link is Up - 1Gbps/Full - flow control rx/tx
[Fri Nov 3 13:35:09 2017] bond0: link status definitely up for interface eth1, 1000 Mbps full duplex
Firmware version:
root@viking-nas:~# dpkg -l |grep readynasos
ii readynasos 6.9.0+4 armel ReadyNASOS base system
Has anyone had the same issue with an RN214? (And solved it?)
- Skywalker (NETGEAR Expert)
vikingfmo, your issue would be completely different from everything else in this thread, as it has nothing to do with Intel's driver. From your logs, your eth0 interface was not working from the beginning of that boot. Does unplugging and replugging the cable fix it? How about rebooting? Do you see any kernel messages about mdio reads failing?
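For example, something along these lines from the NAS shell should show any such messages if they are there:
# Search the kernel log for MDIO/PHY-related errors.
dmesg -T | grep -i mdio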