Skip site navigation (1)Skip section navigation (2)
Date:      Wed, 12 Oct 2005 13:33:06 +0100 (BST)
From:      Robert Watson <rwatson@FreeBSD.org>
To:        gnn@freebsd.org
Cc:        performance@freebsd.org, net@freebsd.org
Subject:   Re: Call for performance evaluation: net.isr.direct
Message-ID:  <20051012132648.J7178@fledge.watson.org>
In-Reply-To: <m21x2r9cz6.wl%gnn@neville-neil.com>
References:  <20051005133730.R87201@fledge.watson.org> <20051011145923.B92528@fledge.watson.org> <m21x2r9cz6.wl%gnn@neville-neil.com>

next in thread | previous in thread | raw e-mail | index | archive | help

On Wed, 12 Oct 2005 gnn@freebsd.org wrote:

> At Tue, 11 Oct 2005 15:01:11 +0100 (BST),
> rwatson wrote:
>> If I don't hear anything back in the near future, I will commit a
>> change to 7.x to make direct dispatch the default, in order to let a
>> broader community do the testing.  :-) If you are setup to easily
>> test stability and performance relating to direct dispatch, I would
>> appreciate any help.
>
> One thing I would caution, though I have no proof nor have I made any 
> tests (yes, I know, bad gnn), is that I would expect this change to 
> degrade non-network performance when the network is under load.  This 
> kind of change is most likely to help those with purely network loads, 
> i.e. routers, bridges, etc and to hurt anyone else.  Are you absolutely 
> sure we should make this the default?

In theory, as I mentioned in my earlier e-mail, this does result in more 
network processing occurring at a hardware ithread priority.  However, the 
software ithread (swi) priority is already quite high.  Looking closely 
at that is probably called for -- specifically, how will this impact 
scheduling for other hardware (rather than software) ithreads?

The most interesting effect I've seen on non-network applications is that, 
because the network stack now uses significantly less CPU when under high 
load, more CPU is available for other activities.  With the performance of 
network hardware available on server now often exceeding the CPU capacity 
of those servers (as compared to a few years ago when 100mbps cards could 
be trivially saturated by server hardware), the cost of processing packets 
is now back up again, so this can occur with relative ease.  Another 
interesting point is that remote traffic can now no longer result in a 
denial of service of local traffic by virtue of overflowing the netisr 
queue.  Previously, a single queue was shared by all network interfaces 
going to the netisr, and in the direct dispatch model, the queueing now 
happens almost entirely in the device driver and skips entering a queue to 
get to the stack.  This has some other interesting effects, not least that 
older cards with less buffering now see significantly less queue space, 
but I'm not sure if that's significant.

In general, I agree with your point though: we need to evaluate the effect 
of this change on a broad array of real-world workloads.  Hence my e-mail, 
which so far has seen two responses -- a private one from Mike Tancsa 
offering to run testing, and your public one.  So anyone willing to help 
evaluate the performance of this change would be most welcome to.

Robert N M Watson



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?20051012132648.J7178>