Date:      Thu, 18 Nov 2010 05:54:14 +1100 (EST)
From:      Bruce Evans <brde@optusnet.com.au>
To:        Maxim Sobolev <sobomax@sippysoft.com>
Cc:        svn-src-stable-7@FreeBSD.org, svn-src-stable@FreeBSD.org, svn-src-all@FreeBSD.org, src-committers@FreeBSD.org, Bruce Evans <brde@optusnet.com.au>
Subject:   Re: svn commit: r215368 - in stable/7/sys: arm/at91 arm/xscale/ixp425 contrib/dev/oltr dev/ae dev/an dev/ar dev/arl dev/ath dev/awi dev/ce dev/cm dev/cnw dev/cp dev/cs dev/ctau dev/cx dev/cxgb dev/ed d...
Message-ID:  <20101118053830.K1202@besplex.bde.org>
In-Reply-To: <4CE3DFB3.4060809@sippysoft.com>
References:  <201011160440.oAG4e3YU039413@svn.freebsd.org> <20101117030118.X1203@besplex.bde.org> <4CE3DFB3.4060809@sippysoft.com>

On Wed, 17 Nov 2010, Maxim Sobolev wrote:

> On 11/16/2010 8:12 AM, Bruce Evans wrote:
>> This was quite low for yesterday's uses (starting in about 1995), but today
>> it is little missed since only yesterday's low-end hardware uses it.  Most
>> of today's interfaces are 1Gbps, and for this it is almost essential for
>> the hardware to have a ring buffer with > 50 entries, so most of today's
>> drivers ignore ifqmaxlen and set the queue length to the almost equally
>> bogus value of the ring buffer size (-1).  I set it to about 10000 instead
>> in bge and em (10000 is too large, but fixes streaming under certain loads
>> when hz is small).
>
> One of those interfaces is if_rl, which is still quite popular these days and 
> supports speeds up to 1 Gbps (which I believe triggered this change). But in

It is the only one on the list that I used.  Maybe it should be handled
specially.  Just bump up its queue lengths to maybe 128 for 100 Mbps and
512 for 1 Gbps in all cases, or tune this depending on the amount of memory?
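
Something like the following is what I have in mind -- untested and purely a
sketch; the RL_TX_QLEN_* names and the link-speed test are made up for
illustration.  It just uses the usual if_var.h attach-time idiom instead of
inheriting the global ifqmaxlen default of 50:

	/*
	 * Hypothetical sketch, not committed code: size the software send
	 * queue at attach time according to the link speed instead of
	 * taking the global ifqmaxlen default (IFQ_MAXLEN == 50).  The
	 * 128/512 figures are the ones suggested above.
	 */
	#include <sys/param.h>
	#include <sys/socket.h>
	#include <net/if.h>
	#include <net/if_var.h>

	#define	RL_TX_QLEN_100		128	/* 100 Mbps parts */
	#define	RL_TX_QLEN_1000		512	/* 1 Gbps (8139C+/816x-class) parts */

	static void
	rl_set_txq_len(struct ifnet *ifp, int is_gigabit)
	{
		int qlen;

		qlen = is_gigabit ? RL_TX_QLEN_1000 : RL_TX_QLEN_100;

		/* Standard attach-time idiom used by most NIC drivers. */
		ifp->if_snd.ifq_drv_maxlen = qlen;
		IFQ_SET_MAXLEN(&ifp->if_snd, qlen);
		IFQ_SET_READY(&ifp->if_snd);
	}

Scaling qlen by the amount of memory (say, from physmem) would be easy to
add on top of this, but fixed figures like these are probably good enough.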

> general I agree, unfortunately the FreeBSD network subsystem is tuned for 
> yesterday's speeds. We are seeing a lot of lockups and other issues under high 
> PPS. I wish somebody would stand up and pick up the task of cleaning it up and 
> re-tuning it for 2010. We could probably even sponsor such work in part 
> (anyone?).

I haven't seen any lockups, just the maximum pps on fixed hardware
decreasing with every increase in the FreeBSD version number (about 30%
since FreeBSD-5).  My hardware's CPU and bus are saturated by low-end em
1 Gbps and medium-end bge 1 Gbps, so bloat in the stack translates directly
into lower pps.  I tuned bge a lot to make it fast under the version of
FreeBSD-5 that I usually run, but barely touched the upper layers.

> Apart from interface tuning for Gbps speeds, another area that needs more 
> work is splitting the memory pool for IPC from the memory pool for the rest 
> of networking. Today's software is highly distributed, and rock-solid IPC 
> is a must for FreeBSD to be a solid server application platform. It's 
> OK when we drop some packets under load, but it's not OK when extreme 
> network activity can bring down communication between an application and the 
> database system within the host itself. And that's exactly what can happen in 
> FreeBSD.

Does flow control help here?  I think it should prevent most dropped packets,
but be actively harmful if it stops the flow when IPC packets are queued
behind non-IPC ones.  Large queue lengths are also bad for latency.

Bruce


