Channel: Intel Communities : Popular Discussions - Wired Ethernet

Issue with "Detected Tx Unit Hang" dropping network connections


Hello,

 

We are having an issue with our NICs hitting a Tx Unit Hang and the adapter not resetting correctly. The error messages below are written to the console at a rapid rate and all networking stops. Connecting via IPMI, I've found that "service network restart" doesn't resolve the issue, but the following steps do work: service network stop; rmmod ixgbe; modprobe ixgbe; service network start. Everything then goes back to normal for some random number of hours (or in some cases days) until it happens again. If anyone has any insight or history with this issue I'd love any input, and I'm happy to provide more details where needed.

 

Thanks,

Matthew

 

The details:

kernel: 2.6.32-358.6.2.el6

Intel driver versions tested: 3.9.15-k (CentOS stock), 3.17.3 (latest version)

Adapter: Ethernet controller: Intel Corporation 82599EB 10-Gigabit Network Connection (rev 01)

               Subsystem: Intel Corporation Ethernet Server Adapter X520-2
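
For anyone reproducing this, the details above can be collected with standard tools (eth3 is the interface name taken from the log below):

uname -r                          # running kernel
ethtool -i eth3                   # driver name and version in use
lspci -nn | grep -i ethernet      # controller and subsystem IDs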

 

The error messages from /var/log/messages (and dmesg):

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Detected Tx Unit Hang

[kern.err] [kernel: .]: Tx Queue             <2>

[kern.err] [kernel: .]: TDH, TDT             <0>, <1a>

[kern.err] [kernel: .]: next_to_use          <1a>

[kern.err] [kernel: .]: next_to_clean        <0>

[kern.err] [kernel: .]: tx_buffer_info[next_to_clean]

[kern.err] [kernel: .]: time_stamp           <101fd8552>

[kern.err] [kernel: .]: jiffies              <101fd8d43>

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: tx hang 301 detected on queue 2, resetting adapter

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Reset adapter

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: master disable timed out

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: detected SFP+: 4

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Reset adapter

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: master disable timed out

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: detected SFP+: 4

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: NIC Link is Up 10 Gbps, Flow Control: RX/TX

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Detected Tx Unit Hang

[kern.err] [kernel: .]: Tx Queue             <2>

[kern.err] [kernel: .]: TDH, TDT             <0>, <2>

[kern.err] [kernel: .]: next_to_use          <2>

[kern.err] [kernel: .]: next_to_clean        <0>

[kern.err] [kernel: .]: tx_buffer_info[next_to_clean]

[kern.err] [kernel: .]: time_stamp           <101fd91c6>

[kern.err] [kernel: .]: jiffies              <101fd9257>

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: tx hang 303 detected on queue 2, resetting adapter
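
Until the root cause is found, the manual recovery above can be scripted. A minimal watchdog sketch, assuming the reload sequence from the post, that the host can tolerate clearing the kernel ring buffer with dmesg -c, and an illustrative 30-second polling interval:

#!/bin/bash
# Watchdog sketch: poll the kernel log for the ixgbe Tx hang message and,
# when it appears, run the same recovery sequence reported to work above.
while sleep 30; do
    # dmesg -c prints and then clears the ring buffer, so each hang event
    # is only handled once
    if dmesg -c | grep -q "Detected Tx Unit Hang"; then
        service network stop
        rmmod ixgbe
        modprobe ixgbe
        service network start
        logger "ixgbe watchdog: driver reloaded after Tx Unit Hang"
    fi
done

This is a stopgap rather than a fix: it only automates the recovery so the box comes back without manual IPMI intervention.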


No Intel(r) Adapters are present in this computer


Windows 8.1 Enterprise 64 bit edition

ASRock QC 5000 ITX Mainboard

Intel Desktop CT GBe PCIe adapter

 

Using PROSet release 21

 

The OS automatically detects the card and brings it up. What I need in addition is the ANS suite so I can team adapters in a LAG.

 

The OS installation is brand new as of 4 hours ago, onto a 240 GB SSD.

 

(Attached screenshot: intelnic.png)

82567LM keeps showing disconnect in event log


I just purchased 4 Dell Optiplex 960 computers with Windows 7 x64, and I am having this issue on all 4 machines. Intermittently a computer disconnects from the network, and at times it shows a 10 Mb connection. I updated the drivers to 11.5.10, dated 12/10/2009. The error in the event log is source: e1kexpress, Event ID 27, "Network link has been disconnected." The workstations all connect to a Dell switch.

 

I spoke to Dell and they have no clue. 

 

Help is appreciated.    

VLAN creation on Windows 10 Enterprise TP


Hello, there.

 

This morning I upgraded my fully functional Windows 8.1 Enterprise installation to Windows 10 Technical Preview. Before that, I downloaded the Intel Network Adapter Driver from this website, version 20.1, for Windows 10 64-bit. After the driver installation I had the VLANs tab in the network card properties; however, I'm unable to create a VLAN. The network card is automatically disabled, then I receive an error message saying this (translated from French):

 

One or more vlans could not be created. Please check the adapter status and try again.


The window freezes and I have to force-close it. The 802.1 option is of course enabled in the Advanced options tab. The event viewer always shows the same error when I try to create a VLAN:


Faulting application name: NCS2Prov.exe, version: 20.1.1021.0, time stamp: 0x554ba6a4

Faulting module name: NcsColib.dll, version: 20.1.1021.0, time stamp: 0x554ba57d

Exception code: 0xc0000005

Fault offset: 0x0000000000264064

Faulting process id: 0x19d4

Faulting application start time: 0x01d0ada33fd50576

Faulting application path: C:\Program Files\Intel\NCS2\WMIProv\NCS2Prov.exe

Faulting module path: C:\WINDOWS\SYSTEM32\NcsColib.dll

Report Id: eefb5842-9220-4bad-93d3-774828c5736e

Faulting package full name:

Faulting package-relative application ID:

 

I already tried uninstalling all the packages and drivers related to the network card. I deleted phantom network cards, then cleaned up the registry. I tried setting some compatibility options on the executable, with no success. I tried reinstalling the driver with driver signature enforcement disabled, and tried disabling IPv4/IPv6 on the network card before trying to add a VLAN... I tried everything I found on Google.

 

Could someone help me, please?

Intel NIC drivers 19.3: huge 6000+ DPC latency spike once every few secs


Hi, I would like to report that the new Intel NIC driver version 19.3, which was just released, causes a huge 6000+ DPC latency spike once every few seconds.

 

my specs

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous Intel NIC driver version 19.1 and the problem is just gone.

I217-LM network adapter problem with 100MB full duplex


I recently updated my Microsoft Deployment Toolkit repository with the latest cabs to support new Dell laptops.

 

After the site was updated, I started having problems with older Dell desktops and laptops that were reimaged. The network cable showed as disconnected after powering up the system, and it stayed disconnected until we physically disconnected the network cable and reconnected it. The 100 MB full-duplex setting was correct. Updating to the latest version of the driver fixed the connection-status problem on that older hardware.

 

We now have a persistent problem with the latest systems, which have the I217-LM NIC onboard. The disconnected-wire status problem at boot-up occurred, and with the latest driver downloaded from Intel we have a duplex problem. All our Dell switches are configured to force the connection to 100 MB full duplex, so we force the NIC to 100 MB full duplex as well. When we check the connection status in the Intel advanced tool, it shows 100 MB half duplex.

 

After testing all the possibilities, the only way we could achieve a stable 100 MB full-duplex connection was by setting both the switch port and the NIC to auto-negotiation. This is not an option for us because all the PCs are daisy-chained with an IP phone configured to work at 100 MB full duplex, and with other NICs auto-negotiation does not always work well.

 

I tried the older versions of the driver available for download: the boot-up connection-status problem occurred with all versions prior to 18.7, and with version 18.7 it is impossible to force the speed to 100 MB full duplex.

 

Is it possible to have a quick fix for this problem? I am pretty sure it is a compatibility problem between the latest NIC hardware, the driver, and the Dell switches.

 

Thanks for your attention.

Intel(R) Ethernet Connection (2) I219-V - Windows Server 2012


On the download page there is clearly a download link presented for Intel(R) Ethernet Connection (2) I219-V...

 

The issue(s) are:

 

1. SSU.exe does not detect the hardware either, yet it is presented in Device Manager as missing hardware and driver

 

2. Installed a fresh MS Server 2012, with the same issue!

 

3. Installed Win 10 Pro; using the installer, it shows net1ic64.inf with version 12.15.23.7

 

4. On the same path are drivers for Win64/NDIS62...NDIS65, meaning drivers for Win7...Win10

 

This is on a brand new mainboard with the B150N chipset.

 

Any help is welcome

 

Hp

82567LM Gigabit Network Connection Link has been disconnected


I have this error constantly coming up in the event log:

Warning    9/30/2009 3:22:09 PM    e1yexpress    27    None

Intel(R) 82567LM Gigabit Network Connection Link has been disconnected.

 

I have a wired connection and I can still surf the Internet, but sometimes the connection slows.

 

Any idea how to fix this?


Issue with "Detected Tx Unit Hang" dropping network connections

$
0
0

Hello,

 

We are having an issue with our NICs getting a TX Unit Hang and the adaptor not resetting correctly.  The below error messages are displayed to the console at a vigorous rate and all networking stops. Connecting via IPMI I've found that "service network restart" doesn't resolve the issue. I've found the following steps do work: service network stop; rmmod ixgbe; modprobe ixgbe; service network start. Then everything goes back to normal for some random number of hours (or in some cases days) until it happens again.  If anyone has any insight or history with this issue I'd love any input. Also I'd be happy to provide more details where needed.

 

Thanks,

Matthew

 

The details:

kernel: 2.6.32-358.6.2.el6

Intel diver versions tested: 3.9.15-k (CentOS stock), 3.17.3 (latest version)

Adaptor: Ethernet controller: Intel Corporation 82599EB 10-Gigabit Network Connection (rev 01)

               Subsystem: Intel Corporation Ethernet Server Adapter X520-2

 

The error messages from /var/log/messages (and dmesg):

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Detected Tx Unit Hang

[kern.err] [kernel: .]: Tx Queue             <2>

[kern.err] [kernel: .]: TDH, TDT             <0>, <1a>

[kern.err] [kernel: .]: next_to_use          <1a>

[kern.err] [kernel: .]: next_to_clean        <0>

[kern.err] [kernel: .]: tx_buffer_info[next_to_clean]

[kern.err] [kernel: .]: time_stamp           <101fd8552>

[kern.err] [kernel: .]: jiffies              <101fd8d43>

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: tx hang 301 detected on queue 2, resetting adapter

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Reset adapter

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: master disable timed out

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: detected SFP+: 4

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Reset adapter

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: master disable timed out

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 0 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 1 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 2 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 3 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 4 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 5 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 6 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 7 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 8 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 9 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 10 not cleared within the polling period

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: RXDCTL.ENABLE on Rx queue 11 not cleared within the polling period

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: detected SFP+: 4

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: NIC Link is Up 10 Gbps, Flow Control: RX/TX

[kern.err] [kernel: .]: ixgbe 0000:08:00.1: eth3: Detected Tx Unit Hang

[kern.err] [kernel: .]: Tx Queue             <2>

[kern.err] [kernel: .]: TDH, TDT             <0>, <2>

[kern.err] [kernel: .]: next_to_use          <2>

[kern.err] [kernel: .]: next_to_clean        <0>

[kern.err] [kernel: .]: tx_buffer_info[next_to_clean]

[kern.err] [kernel: .]: time_stamp           <101fd91c6>

[kern.err] [kernel: .]: jiffies              <101fd9257>

[kern.info] [kernel: .]: ixgbe 0000:08:00.1: eth3: tx hang 303 detected on queue 2, resetting adapter

Intel(R) 82579V Gigabit Network device issues


Dear all,

 

I recently bought a new Sandy Bridge Core i5 machine and have been trying to install Win SBS 2008, but during the process it asked me for a driver for the Ethernet adapter. I cannot find one anywhere, online or on the driver CD. Can anyone help me locate an Intel(R) 82579V Gigabit Network driver for Win SBS 2008, please?

 

Thanks a lot

Larry

Intel I218-V - slow transfers FROM gigabit capable devices


I have a desktop (1) PC with an Asus Z97 Pro (WiFi ac) motherboard and Intel I218-V NIC running Windows 8.1 Pro. I'd been using it over WiFi, but recently ran a cable to it to get better transfer speeds to/from my gigabit-capable Synology ds213j NAS. Transfers from the NAS for some reason average 150 KB/s, while transfers to the NAS are a reasonable 70 MB/s. After troubleshooting, I've identified that the issue appears only in transfers FROM gigabit-capable devices. It would be nice if anyone could help identify the reason and solve it.

 

I just updated to Intel PROSet driver version 20.0.10.0, but that didn't help. The link status was set to auto-negotiation and the speed was 1 Gbps/Full Duplex. Pinging any of the devices showed a minimal 1-5 ms latency.

 

Below is a summary of troubleshooting with various connected devices to identify the problem, and everything seems to point to the I218-V adapter. I sent this information to Asus about 2 months ago, but so far have not gotten back anything more than "yes, it looks like a driver issue, we are investigating". It's getting annoying; I am considering returning the board and buying another, but I see that most Z97-based boards use the same NIC. I got this board after returning two Gigabyte GA-Z87X-UD5 TH boards with faulty RAM slots; those boards used the Intel I217-V adapter and didn't have any transfer speed problems.

 

When I did the troubleshooting I used driver 19.0.5.0 (if I remember correctly; basically the latest of the 19.x line). At that point transfers to desktop (1) averaged out at 5 MB/s instead of the current 150 KB/s. I'm not sure when or why it dropped, but it happened before I updated drivers, and I haven't made any notable changes to hardware or infrastructure; it might somehow have been caused by Windows Update (it didn't install network drivers, but did install an Intel Management Engine Interface update). I also have OS X installed on this machine (I know it's not officially supported) and it still receives data at 5 MB/s from the NAS.

 

TROUBLESHOOTING data sent to Asus

 

For comparison, I started up my old desktop (2) PC, running Windows 8 Pro on a Gigabyte X58A-UD3R motherboard with a Realtek RTL8111E 1 Gbit NIC, to see how it behaves. I tested copying files of 1.5 GB. It transfers from the NAS at ~100 MB/s, but transfers to it at 40 MB/s (not sure why so low, but that is not the issue at the moment).

 

More interestingly, desktop (1) transfers to desktop (2) at ~100 MB/s, but transfer from (2) to (1) is terribly slow, ~500 KB/s (jumping 0-100). It doesn't matter whether they are connected via the switch or directly with a 1 m CAT5e cable. I also saw transfers from (1) to (2) sometimes drop to, or even start at, ~20 MB/s, even when connected directly.

 

Additionally, I tested transfers from my Dell n5110 laptop with a 100 Mbit NIC; for it, both uploads and downloads maxed out at 11 MB/s with all devices.

 

My friend's laptop with a gigabit NIC also had slow 500 KB/s (jumping 0-100) transfers to desktop (1), but in all other cases (transfers from (1), or to/from desktop (2)) it was stable at ~100 MB/s.

 

Summary of devices:

  • Desktop1 - Windows 8.1 Pro on Asus z97 Pro (wifi ac) motherboard with Intel I218-V NIC
  • Desktop2 - Windows 8 Pro on Gigabyte X58A-UD3R motherboard with Realtek RTL8111E NIC
  • NAS - Synology ds213j
  • Laptop - Dell n5110, Windows 8.1, RTL8105E-VB 100 Mbit NIC
  • FriendsLaptop - Dell, Windows 8, Broadcom NetLink Gigabit Ethernet NIC

 

Below is a summary of transfer speeds between devices. In most cases they were connected via a D-Link DGS-105/E gigabit switch. I also tried direct connections to Desktop1, but that didn't have any positive effect. Desktop1 also has Linux Mint 17.1 and OS X 10.10 installed, and they had similar transfer issues. I know they are not officially supported; I'm just pointing it out as additional information.

 

Transfer speeds (the arrow indicates direction: X < Y means X receiving from Y)

Desktop1 < NAS = 5 MB/s ( sometimes around

Desktop1 < Desktop2 = 500 KB/s (unstable, jumps 0-1000)

Desktop1 < FriendsLaptop = 500 KB/s (unstable, jumps 0-1000)

 

Desktop2 < Desktop1 = 100 MB/s (in some cases just 20 MB/s)

Desktop2 < NAS = 100 MB/s

Desktop2 < FriendsLaptop = 100 MB/s

 

NAS < Desktop1 = 70 MB/s

NAS < Desktop2 = 40 MB/s

NAS < FriendsLaptop = (forgot to make a note, but it was high)

Laptop <> [All devices] = 11 MB/s

 

FriendsLaptop < Desktop1 = 100 MB/s

FriendsLaptop < Desktop2 = 100 MB/s

FriendsLaptop < NAS = (forgot to make a note, but it was high)

 

Transfer speeds with Desktop1 booted into Linux Mint 17.1 and OS X 10.10

Desktop1_Linux < NAS = 700 KB/s

Desktop1_Linux < Desktop2 = 70 KB/s

Desktop1_OSX < NAS = 5 MB/s

Desktop1_OSX < Desktop2 = 100 KB/s

Desktop2 < Desktop1_Linux = 43 MB/s

Desktop2 < Desktop1_OSX = 40 MB/s

NAS < Desktop1_Linux = 40 MB/s

NAS < Desktop1_OSX = 5 MB/s

 

This all seems to suggest an issue with the Intel I218-V NIC on Desktop1 when receiving data from gigabit-capable devices. The fact that I had similar issues in Linux and OS X points to either a hardware problem or some generic problem in driver code shared among platforms. Could it be a hardware problem?
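
Since the slow receive path reproduces under Linux Mint on Desktop1, a raw TCP test can separate the file-copy stack (SMB, disks) from the NIC receive path itself. A minimal sketch, assuming iperf3 is installed on both desktops and that the interface on Desktop1 is eth0 (both the tool choice and the names are assumptions, not from the original setup):

# On Desktop2 (or any gigabit peer), start an iperf3 server:
iperf3 -s

# On Desktop1 under Linux Mint, measure raw receive throughput; -R reverses
# the test so Desktop1 is the receiver. 192.168.1.20 is a placeholder address.
iperf3 -c 192.168.1.20 -R -t 30

# If the raw receive rate is also stuck in the KB/s range, try disabling
# receive-side offloads on Desktop1 and re-run; a change here would point at
# the offload path rather than the cable or the peer:
ethtool -K eth0 gro off gso off tso off

If iperf3 in the receive direction is fast while file copies stay slow, the problem is above the NIC; if it is equally slow, that strengthens the hardware-or-shared-driver-code theory above.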

 

Here’s the list of things I tried in attempt to solve the issue:

  • Updated to the latest drivers installed with Intel PROSet software version 19.5.303.0 (version 12.12.90.19 shown in Device Manager)
  • Downgraded to version 19.1.51.0, listed on the Asus web site (version 12.11.96.1 shown in Device Manager)
  • Unplugged all Fast Ethernet devices (TV, set-top box, laptop) from the network, based on a forum suggestion
  • Updated the UEFI firmware to the latest version, 2012
  • Verified that the [Adapter Properties\Link Speed] section shows a 1 Gbps/Full Duplex link status
  • Tried forcing "1 Gbps Full Duplex" mode instead of "Auto Negotiation"
  • Ran cable and hardware tests via [Adapter Properties\Link Speed\Diagnostics]. I used 3 CAT5e cables: one 15 m cable connected desktop 1 to the switch, and two 1 m cables (one at a time) connected the switch to desktop 2, or desktop 1 to desktop 2 directly. As you can see in the posted test results, they are contradictory: first it says no problems were detected, then it reports a bad connection with a ridiculous distance-to-problem of 65535 meters, i.e. 2^16 − 1 (which suggests an uninitialized default value in the program). However, using the 1 m cable that reported no issues to directly connect both desktops still produced the same transfer results.
  • Tried changing the Gigabit Master Slave Mode in the Advanced adapter settings to "Forced Master Mode" instead of auto-detect

 

 

CABLE TEST RESULTS

----------- 1 m cable_1 test

No cable problems detected.

Test details

Polarity : Normal

Local Receiver Status : Passed

Remote Receiver Status : Passed

 

Cable Offline Test

Poor quality cable detected

Possible causes: Faulty cable, faulty connector, or a speed/duplex mismatch. Verify that the speed/duplex setting on the switch/hub is configured for auto-negotiation.

Test details

Cable Quality

The test detected a bad connection.

Distance to problem: 65535 meters.

 

----------- 15 m cable test

No cable problems detected.

Test details

Cable Length : 21 Meters

Polarity : Normal

Local Receiver Status : Passed

Remote Receiver Status : Passed

 

Cable Offline Test

Poor quality cable detected

Possible causes: Faulty cable, faulty connector, or a speed/duplex mismatch. Verify that the speed/duplex setting on the switch/hub is configured for auto-negotiation.

Test details

Cable Quality

The test detected a bad connection.

Distance to problem: 65535 meters.

 

----------- 1 m cable_2 test

No cable problems detected.

Test details

Polarity : Normal

Local Receiver Status : Passed

Remote Receiver Status : Passed

 

Cable Offline Test

Good quality cable detected.

Test details

Cable Quality


X710 Flow director issues on Linux


Hello all,

 

I am not able to set up Flow Director to filter flow type ipv4. There does not seem to be any issue when the flow type is specified as tcp.

It's on Linux (4.9.27), with a freshly downloaded driver rather than the in-kernel one. Below is the output of the driver version and firmware, and the ntuple filter I want to apply.

No error is shown anywhere.

 

Thank you!

 

ethtool -i i40e1

driver: i40e

version: 2.0.23

firmware-version: 5.05 0x80002927 1.1313.0

expansion-rom-version:

bus-info: 0000:05:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

ethtool -k i40e1

Features for i40e1:

rx-checksumming: off

tx-checksumming: off

        tx-checksum-ipv4: off

        tx-checksum-ip-generic: off [fixed]

        tx-checksum-ipv6: off

        tx-checksum-fcoe-crc: off [fixed]

        tx-checksum-sctp: off

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: off

        tx-tcp-segmentation: off

        tx-tcp-ecn-segmentation: off

        tx-tcp-mangleid-segmentation: off

        tx-tcp6-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: off

generic-receive-offload: off

large-receive-offload: off [fixed]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: on

receive-hashing: on

highdma: on

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: on

tx-gre-csum-segmentation: off [fixed]

tx-ipxip4-segmentation: on

tx-ipxip6-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

tx-udp_tnl-csum-segmentation: off [fixed]

tx-gso-partial: off [fixed]

tx-sctp-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

 

i40e version:        2.0.23

 

#ethtool -U i40e1 flow-type ip4 action -1 loc 1
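
For comparison, a sketch of the same filter with an explicit match field, plus commands to verify whether anything was actually installed. The source address below is only a placeholder, and whether a given i40e version accepts raw ip4 filters at all (or requires an l4proto match) is exactly the behavior in question here:

# Retry the raw IPv4 drop filter (action -1), this time matching on a source IP
ethtool -U i40e1 flow-type ip4 src-ip 192.0.2.10 action -1 loc 1

# List the ntuple filters the driver reports as installed
ethtool -u i40e1

# Check the kernel log for i40e messages emitted after the attempt
dmesg | grep -i i40e | tail -n 20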

