Date:      Mon, 22 Nov 2004 11:34:13 +0000 (GMT)
From:      Robert Watson <rwatson@freebsd.org>
To:        Sean McNeil <sean@mcneil.com>
Cc:        Jeremie Le Hen <jeremie@le-hen.org>
Subject:   Re: Re[4]: serious networking (em) performance (ggate and NFS) problem
Message-ID:  <Pine.NEB.3.96L.1041122112718.19086S-100000@fledge.watson.org>
In-Reply-To: <1101100870.16086.16.camel@server.mcneil.com>

On Sun, 21 Nov 2004, Sean McNeil wrote:

> I have to disagree.  Packet loss is likely according to some of my
> tests.  With the re driver, no change except placing a 100BT setup with
> no packet loss to a gigE setup (both linksys switches) will cause
> serious packet loss at 20Mbps data rates.  I have discovered the only
> way to get good performance with no packet loss was to
> 
> 1) Remove interrupt moderation
> 2) defrag each mbuf that comes in to the driver.

Sounds like you're bumping into a queue limit that is made worse by
interrupting less frequently, resulting in bursts of packets that are
relatively large, rather than a trickle of packets at a higher rate.
Perhaps a limit on the number of outstanding descriptors in the driver or
hardware and/or a limit in the netisr/ifqueue queue depth.  You might try
changing the default IFQ_MAXLEN from 50 to 128 to increase the size of the
ifnet and netisr queues.  You could also try setting net.isr.enable=1 to
enable direct dispatch, which in the in-bound direction would reduce the
number of context switches and queueing.  It sounds like the device driver
has a limit of 256 receive and transmit descriptors, which is presumably
derived from the hardware limit, but I have no documentation on hand so
can't confirm that.
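
To make the queue-limit point concrete, here is a simplified sketch in C of
the classic ifnet/netisr handoff check (a paraphrase for illustration, not
the literal kernel source; the _sketch names are made up): once ifq_len
reaches ifq_maxlen, the rest of a burst is dropped and only a drop counter
records the loss, which is why raising IFQ_MAXLEN buys headroom for the
larger bursts that interrupt moderation produces.

#include <stdio.h>
#include <stdlib.h>

struct mbuf_sketch { struct mbuf_sketch *m_nextpkt; };

struct ifqueue_sketch {
    struct mbuf_sketch *ifq_head;
    struct mbuf_sketch *ifq_tail;
    int ifq_len;     /* packets currently queued */
    int ifq_maxlen;  /* e.g. IFQ_MAXLEN == 50 */
    int ifq_drops;   /* packets lost because the queue was full */
};

/* Returns 1 if the packet was queued, 0 if it was dropped. */
static int
ifq_handoff_sketch(struct ifqueue_sketch *ifq, struct mbuf_sketch *m)
{
    if (ifq->ifq_len >= ifq->ifq_maxlen) {  /* the _IF_QFULL() test */
        ifq->ifq_drops++;                   /* the _IF_DROP() side effect */
        free(m);                            /* packet is silently lost */
        return (0);
    }
    m->m_nextpkt = NULL;
    if (ifq->ifq_tail == NULL)
        ifq->ifq_head = m;
    else
        ifq->ifq_tail->m_nextpkt = m;
    ifq->ifq_tail = m;
    ifq->ifq_len++;
    return (1);
}

int
main(void)
{
    struct ifqueue_sketch q = { NULL, NULL, 0, 50, 0 };
    struct mbuf_sketch *m;
    int i, queued = 0;

    /* Simulate a 128-packet burst (one moderated interrupt's worth)
     * hitting a 50-entry queue: everything past the limit is dropped. */
    for (i = 0; i < 128; i++) {
        if ((m = malloc(sizeof(*m))) == NULL)
            break;
        queued += ifq_handoff_sketch(&q, m);
    }
    printf("queued %d, dropped %d\n", queued, q.ifq_drops);
    return (0);
}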

It would be interesting on the send and receive sides to inspect the
counters for drops at various points in the network stack; i.e., are we
dropping packets at the ifq handoff because we're overfilling the
descriptors in the driver, are packets dropped on the inbound path going
into the netisr due to over-filling before the netisr is scheduled, etc. 
And, it's probably interesting to look at stats on filling the socket
buffers for the same reason: if bursts of packets come up the stack, the
socket buffers may well be over-filled before the user thread can run.
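
For anyone who wants to poke at those counters from userland, a small sysctl
reader along the lines below works; the two MIB names are the ones 5.x-era
kernels export for the IP input (netisr) queue, so adjust them if your
kernel spells them differently, and per-interface and socket-buffer drops
can usually be eyeballed with netstat -id and netstat -s respectively.

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/*
 * Print the depth limit and drop counter of the IP input (netisr)
 * queue.  The MIB names below are the ones exported by 5.x-era
 * kernels; adjust if your kernel differs.
 */
static int
read_int_sysctl(const char *name, int *value)
{
    size_t len = sizeof(*value);

    return (sysctlbyname(name, value, &len, NULL, 0));
}

int
main(void)
{
    int maxlen, drops;

    if (read_int_sysctl("net.inet.ip.intr_queue_maxlen", &maxlen) != 0 ||
        read_int_sysctl("net.inet.ip.intr_queue_drops", &drops) != 0) {
        perror("sysctlbyname");
        return (1);
    }
    printf("ipintrq: maxlen %d, drops %d\n", maxlen, drops);
    return (0);
}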

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Principal Research Scientist, McAfee Research
