From: Terry Lambert <tlambert2@mindspring.com>
Date: Fri, 11 Apr 2003 09:22:47 -0700
To: bj@dc.luth.se
Cc: freebsd-hackers@freebsd.org, freebsd-performance@freebsd.org,
    "Jin Guojun [DSD]", Eric Anderson, David Gilbert
Message-ID: <3E96EBD7.5CA4C171@mindspring.com>
References: <200304111407.h3BE7hKl086838@dc.luth.se>
Subject: Re: tcp_output starving -- is due to mbuf get delay?

Borje Josefsson wrote:
> I should add that I have tried with MTU 1500 also. Using NetBSD as
> sender works fine (just a little bit higher CPU load). When we tried
> MTU 1500 with FreeBSD as sender, we got even lower performance.
> Somebody else in this thread said that he had got full GE speed
> between two FreeBSD boxes connected back-to-back. I don't question
> that, but that doesn't prove anything. The problem arises when you
> are trying to do this long-distance and have to handle a large mbuf
> queue.

The boxes were not connected "back to back"; they were connected
through three Gigabit switches and a VLAN trunk. But they were in a
lab, yes. I'd be happy to try long distance for you, and even go so
far as to fix the problem for you, if you are willing to drop 10Gbit
fiber to my house. 8-) 8-).

As far as a large mbuf queue goes, one obvious difference is SACK
support; however, this cannot be the problem, since the
NetBSD->FreeBSD speed is (supposedly) unaffected. What is the
FreeBSD->NetBSD speed?

Some knobs to try on FreeBSD:

	net.inet.ip.intr_queue_maxlen     -> 300
	net.inet.ip.check_interface       -> 0
	net.inet.tcp.rfc1323              -> 0
	net.inet.tcp.inflight_enable      -> 1
	net.inet.tcp.inflight_debug       -> 0
	net.inet.tcp.delayed_ack          -> 0
	net.inet.tcp.newreno              -> 0
	net.inet.tcp.slowstart_flightsize -> 4
	net.inet.tcp.msl                  -> 1000
	net.inet.tcp.always_keepalive     -> 0
	net.inet.tcp.sendspace            -> 65536  (on sender)

Don't try them all at once and expect magic; you will probably need
some combination. Also, try recompiling your kernel *without* IPSEC
support.

-- Terry
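[Editor's note: as a sketch of how the knobs above could be applied,
the runtime values can be set with sysctl(8), or made persistent in
/etc/sysctl.conf on the sending box. The values below are copied from
the list in the mail; which subset (if any) helps will depend on the
path, so change one at a time as the mail advises.]

```shell
# /etc/sysctl.conf fragment -- candidate values taken from the list
# in the mail above. Apply a subset at a time, not all at once.
# net.inet.tcp.sendspace matters on the sender only.
net.inet.ip.intr_queue_maxlen=300
net.inet.tcp.inflight_enable=1
net.inet.tcp.delayed_ack=0
net.inet.tcp.slowstart_flightsize=4
net.inet.tcp.sendspace=65536
```

The same values can be tried interactively without a reboot, e.g.
"sysctl -w net.inet.tcp.delayed_ack=0", and reverted the same way.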