Date:      Thu, 20 Mar 2014 23:47:44 -0300
From:      Christopher Forgeron <csforgeron@gmail.com>
To:        Rick Macklem <rmacklem@uoguelph.ca>
Cc:        FreeBSD Net <freebsd-net@freebsd.org>
Subject:   Re: 9.2 ixgbe tx queue hang
Message-ID:  <CAB2_NwC3on1xP3UAutkQa-3zu_JhK0+-ZjVb6_3NVemw2Or-KQ@mail.gmail.com>
In-Reply-To: <CAB2_NwCGsAHdMFPoST05azb9K_O-K_khk3Bi1sF2om3puCcyCw@mail.gmail.com>
References:  <CAB2_NwB=21H5pcx=Wzz5gV38eRN+tfwhY28m2FZhdEi6X3JE7g@mail.gmail.com> <1543350122.637684.1395368002237.JavaMail.root@uoguelph.ca> <CAB2_NwCGsAHdMFPoST05azb9K_O-K_khk3Bi1sF2om3puCcyCw@mail.gmail.com>

BTW - I think this will end up being a TSO issue, not the patch that Jack
applied.

When I boot with Jack's patch (the MJUM9BYTES removal), this is what netstat -m shows:

21489/2886/24375 mbufs in use (current/cache/total)
4080/626/4706/6127254 mbuf clusters in use (current/cache/total/max)
4080/587 mbuf+clusters out of packet secondary zone in use (current/cache)
16384/50/16434/3063627 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/907741 9k jumbo clusters in use (current/cache/total/max)
0/0/0/510604 16k jumbo clusters in use (current/cache/total/max)
79068K/2173K/81241K bytes allocated to network (current/cache/total)
18831/545/4542 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
15626/0/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile

Here is an un-patched boot:

21550/7400/28950 mbufs in use (current/cache/total)
4080/3760/7840/6127254 mbuf clusters in use (current/cache/total/max)
4080/2769 mbuf+clusters out of packet secondary zone in use (current/cache)
0/42/42/3063627 4k (page size) jumbo clusters in use (current/cache/total/max)
16439/129/16568/907741 9k jumbo clusters in use (current/cache/total/max)
0/0/0/510604 16k jumbo clusters in use (current/cache/total/max)
161498K/10699K/172197K bytes allocated to network (current/cache/total)
18345/155/4099 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
3/3723/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile



See how removing MJUM9BYTES just pushes the problem from the 9k jumbo cluster zone into the 4k jumbo cluster zone?

Compare this to my FreeBSD 9.2-STABLE machine from ~Dec 2013: exact same hardware, revisions, zpool size, etc.; it's just running an older FreeBSD.

# uname -a
FreeBSD SAN1.XXXXX 9.2-STABLE FreeBSD 9.2-STABLE #0: Wed Dec 25 15:12:14
AST 2013     aatech@FreeBSD-Update Server:/usr/obj/usr/src/sys/GENERIC
amd64

root@SAN1:/san1 # uptime
 7:44AM  up 58 days, 38 mins, 4 users, load averages: 0.42, 0.80, 0.91

root@SAN1:/san1 # netstat -m
37930/15755/53685 mbufs in use (current/cache/total)
4080/10996/15076/524288 mbuf clusters in use (current/cache/total/max)
4080/5775 mbuf+clusters out of packet secondary zone in use (current/cache)
0/692/692/262144 4k (page size) jumbo clusters in use (current/cache/total/max)
32773/4257/37030/96000 9k jumbo clusters in use (current/cache/total/max)
0/0/0/508538 16k jumbo clusters in use (current/cache/total/max)
312599K/67011K/379611K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

Lastly, please note this link:

http://lists.freebsd.org/pipermail/freebsd-net/2012-October/033660.html

It's so old that I assume the TSO leak that he speaks of has been patched,
but perhaps not. More things to look into tomorrow.


On Thu, Mar 20, 2014 at 11:32 PM, Christopher Forgeron <csforgeron@gmail.com> wrote:

> Yes, there is something broken in TSO for sure, as disabling it allows me
> to run without error. It is possible that the drop in performance is
> allowing me to stay under a critical threshold for the problem, but I'd
> feel happier testing to make sure.
>
> I understand what you're asking for in the patch, I'll make the edits
> tomorrow and recompile a test kernel and see.
>
> Right now I'm running tests on the ixgbe that Jack sent. Even if his patch
> fixes the issue, I wonder if something else isn't broken in TSO, as the
> ixgbe code has had these lines for a long time, and it's only on this 10.0
> build that I have issues.
>
> I'll be following up tomorrow with info on either outcome.
>
> Thanks for your help.. your rusty networking is still better than mine. :-)
>
>
> On Thu, Mar 20, 2014 at 11:13 PM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
>
>> Christopher Forgeron wrote:
>> >
>> > Output from the patch you gave me (I have screens of it; let me know
>> > what you're hoping to see.)
>> >
>> >
>> > Mar 20 16:37:22 SAN0 kernel: after mbcnt=33 pklen=65538 actl=65538
>> > Mar 20 16:37:22 SAN0 kernel: before pklen=65538 actl=65538
>> Hmm. I think this means that the loop that generates TSO segments in
>> tcp_output() is broken, since I'm pretty sure that the maximum size
>> should be IP_MAXPACKET (65535).
>>
>> Either that or some non-TCP socket is trying to send a packet that
>> exceeds IP_MAXPACKET for some reason.
>>
>> Would it be possible to add a printf() for m->m_pkthdr.csum_flags
>> to the before case, in the "if" that generates the before printf?
>> I didn't think to put this in, but CSUM_TSO will be set if it
>> is a TSO segment, I think? My networking is very rusty.
>> (If how to add this isn't obvious, just email and I'll update
>>  the patch.)
>>
>> Thanks for doing this, rick
>>
>>
>


