Date:      Fri, 28 Aug 2015 18:52:06 -0700
From:      Garrett Cooper <yaneurabeya@gmail.com>
To:        "K. Macy" <kmacy@freebsd.org>
Cc:        John Baldwin <jhb@freebsd.org>, Sean Bruno <sbruno@freebsd.org>, "freebsd-arch@freebsd.org" <freebsd-arch@freebsd.org>
Subject:   Re: Network card interrupt handling
Message-ID:  <00E4073A-9AF4-4FAD-8C09-B771C26A8319@gmail.com>
In-Reply-To: <CAHM0Q_N65J9OSaU=znjgJ_gEiu=M-cb9q1hrxskGSvYFhxL_NQ@mail.gmail.com>
References:  <55DDE9B8.4080903@freebsd.org> <24017021.PxBoCiQKDJ@ralph.baldwin.cx> <CAHM0Q_N65J9OSaU=znjgJ_gEiu=M-cb9q1hrxskGSvYFhxL_NQ@mail.gmail.com>


> On Aug 28, 2015, at 18:25, K. Macy <kmacy@freebsd.org> wrote:
>
>> On Aug 28, 2015 12:59 PM, "John Baldwin" <jhb@freebsd.org> wrote:
>>
>>> On Wednesday, August 26, 2015 09:30:48 AM Sean Bruno wrote:
>>> We've been diagnosing what appeared to be out of order processing in
>>> the network stack this week only to find out that the network card
>>> driver was shoveling bits to us out of order (em).
>>>
>>> This *seems* to be due to a design choice where the driver is allowed
>>> to assert a "soft interrupt" to the h/w device while real interrupts
>>> are disabled.  This allows a fake "em_msix_rx" to be started *while*
>>> "em_handle_que" is running from the taskqueue.  We've isolated and
>>> worked around this by setting our processing_limit in the driver to
>>> -1.  This means that *most* packet processing is now handled in the
>>> MSI-X handler instead of being deferred.  Some periodic interference
>>> is still detectable via em_local_timer() which causes one of these
>>> "fake" interrupt assertions in the normal, card is *not* hung case.
>>>
>>> Both functions use identical code for a start.  Both end up down
>>> inside of em_rxeof() to process packets.  Both drop the RX lock prior
>>> to handing the data up the network stack.
>>>
>>> This means that the em_handle_que running from the taskqueue will be
>>> preempted.  Dtrace confirms that this allows out of order processing
>>> to occur at times and generates a lot of resets.
>>>
>>> The reason I'm bringing this up on -arch and not on -net is that this
>>> is a common design pattern in some of the Ethernet drivers.  We've
>>> done preliminary tests on a patch that moves *all* processing of RX
>>> packets to the rx_task taskqueue, which means that em_handle_que is
>>> now the only path to get packets processed.
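[Editor's note: the racy shape Sean describes can be pictured with the following heavily simplified pseudocode; the em_* function names come from this thread, but the bodies are illustrative, not the actual driver source.]

```
/* Both entry points funnel into the same RX path. */
em_msix_rx()       /* real or "fake" soft interrupt, runs in an ithread */
    -> em_rxeof(adapter)

em_handle_que()    /* deferred work, runs from the taskqueue */
    -> em_rxeof(adapter)

em_rxeof(adapter):
    EM_RX_LOCK(adapter)
    while (descriptors available):
        m = dequeue_descriptor()
        EM_RX_UNLOCK(adapter)   /* lock dropped before ...          */
        if_input(ifp, m)        /* ... handing the mbuf up the stack */
        EM_RX_LOCK(adapter)
    EM_RX_UNLOCK(adapter)
```

If the ithread instance preempts the taskqueue instance inside that unlocked window, a later descriptor can reach the stack before an earlier one, which is exactly the out-of-order delivery observed with dtrace.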
>>
>> It is only a common pattern in the Intel drivers. :-/  We (collectively)
>> spent quite a while fixing this in ixgbe and igb.  Longer (hopefully more
>> like medium) term I have an update to the interrupt API I want to push in
>> that allows drivers to manually schedule interrupt handlers using an
>> 'hwi' API to replace the manual taskqueues.  This also ensures that
>> the handler that dequeues packets is only ever running in an ithread
>> context and never concurrently.
>
> Jeff has a generalization of the net_task infrastructure used at Nokia
> called grouptaskq that I've used for iflib. That does essentially what you
> refer to. I've converted ixl and am currently about to test an ixgbe
> conversion. I anticipate converting mlxen, all Intel drivers as well as the
> remaining drivers with device specific code in netmap. The one catch is
> finding someone who will publicly admit to owning re hardware so that I can
> buy it from him and test my changes.
>
> Cheers.

I have 2 re NICs in my fileserver at home (Asus went cheap on some of their
MBs a while back), but the cards shouldn't cost more than $15 + shipping
(look for "Realtek 8169" on Google).

HTH!
-NGie


