From owner-freebsd-current@FreeBSD.ORG Thu Nov 25 18:48:21 2004
Message-ID: <41A628F3.3000309@freebsd.org>
Date: Thu, 25 Nov 2004 19:48:19 +0100
From: Andre Oppermann <andre@freebsd.org>
To: Robert Watson
cc: Jeremie Le Hen
cc: freebsd-current@freebsd.org
cc: freebsd-stable@freebsd.org
Subject: Re: serious networking (em) performance (ggate and NFS) problem

Robert Watson wrote:
> On Sun, 21 Nov 2004, Sean McNeil wrote:
>
>> I have to disagree.  Packet loss is likely according to some of my
>> tests.  With the re driver, moving a setup that shows no packet loss
>> on 100BT to gigE (both Linksys switches), with no other change, causes
>> serious packet loss at 20 Mbps data rates.  The only way I have found
>> to get good performance with no packet loss was to
>>
>> 1) remove interrupt moderation, and
>> 2) defrag each mbuf that comes into the driver.
>
> Sounds like you're bumping into a queue limit that is made worse by
> interrupting less frequently, resulting in bursts of packets that are
> relatively large, rather than a trickle of packets at a higher rate.
> Perhaps a limit on the number of outstanding descriptors in the driver
> or hardware, and/or a limit on the netisr/ifqueue queue depth.  You
> might try raising the default IFQ_MAXLEN from 50 to 128 to increase
> the size of the ifnet and netisr queues.  You could also try setting
> net.isr.enable=1 to enable direct dispatch, which in the inbound
> direction would reduce the number of context switches and queueing.
> It sounds like the device driver has a limit of 256 receive and
> transmit descriptors, which is probably derived from the hardware
> limit, but I have no documentation on hand so I can't confirm that.
>
> It would be interesting, on the send and receive sides, to inspect the
> counters for drops at various points in the network stack: i.e., are
> we dropping packets at the ifq handoff because we're overfilling the
> descriptors in the driver, are packets dropped on the inbound path
> going into the netisr due to over-filling before the netisr is
> scheduled, etc.  It's probably also interesting to look at stats on
> filling the socket buffers for the same reason: if bursts of packets
> come up the stack, the socket buffers could well be over-filled before
> the user thread can run.

I think it's the tcp_output() path that overflows the transmit side of
the card.  I take that from the better numbers he gets when he defrags
the packets before handing them to the driver.
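For reference, the defrag workaround looks roughly like this in a
driver's transmit path.  This is only a sketch: the driver name, softc,
encap function, and segment limit are invented for illustration; the
real pieces are m_defrag(9) and M_DONTWAIT from the mbuf API.  (The
other knobs Robert mentions are the IFQ_MAXLEN constant in
sys/net/if.h and the net.isr.enable sysctl.)

/*
 * Sketch: collapse an over-fragmented mbuf chain before loading it
 * into the TX descriptor ring.  drv_softc, drv_encap(), and
 * DRV_MAXSEGS are hypothetical; m_defrag(9) is the real API.
 */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/mbuf.h>

#define DRV_MAXSEGS	8	/* assumed per-packet TX descriptor limit */

struct drv_softc;		/* hypothetical per-device softc */

static int
drv_encap(struct drv_softc *sc, struct mbuf **m_head)
{
	struct mbuf *m;
	int nsegs;

	/* Count the fragments in the chain. */
	nsegs = 0;
	for (m = *m_head; m != NULL; m = m->m_next)
		nsegs++;

	/*
	 * If the chain needs more segments than the hardware takes
	 * per packet, copy it into a compact chain first.
	 */
	if (nsegs > DRV_MAXSEGS) {
		m = m_defrag(*m_head, M_DONTWAIT);
		if (m == NULL)
			return (ENOBUFS);	/* caller frees the chain */
		*m_head = m;
	}

	/* ... bus_dmamap_load_mbuf() and descriptor setup go here ... */
	return (0);
}

The tradeoff is that m_defrag() copies the whole packet, so it burns
CPU and mbuf clusters; it only wins when long chains would otherwise
make the hardware drop or stall.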
Once I catch up with my mail I'll start putting up the code I wrote
over the last two weeks. :-)  You can call me Mr. TCP now. ;-)

--
Andre