From owner-freebsd-performance@FreeBSD.ORG Thu Dec 10 13:56:13 2009
Date: Thu, 10 Dec 2009 08:56:12 -0500
From: Bill Moran <wmoran@collaborativefusion.com>
To: "Noisex"
Cc: freebsd-performance@freebsd.org
Subject: Re: FreeBSD TCP tuning and performance

In response to "Noisex":

> Hi!  I have a problem with TCP performance on FreeBSD boxes with 1Gbps
> network interfaces (Broadcom NetXtreme II BCM5708 1000Base-T (B2)).
> Currently I use FreeBSD 7.1 amd64.
>
> The test lab: 2 x (server-client) HP ProLiant DL360 G5 (quad-core, 8 GB
> RAM, RAID 5 SAS).
>
> For the network benchmarks I used nuttcp and iperf.
>
> The servers (client and server) are in one VLAN.

If this is on a switch shared with other (busy) systems, you might be
measuring the saturation/capacity of the switch (even if you have those
two units on a dedicated VLAN).  Try the test with a crossover cable to
eliminate that possibility.

> The results on 1Gbps (down & up):
>
> 63.4375 MB / 1.00 sec = 532.1332 Mbps
> 64.3750 MB / 1.00 sec = 540.0426 Mbps
> 62.8125 MB / 1.00 sec = 526.8963 Mbps
> 64.5625 MB / 1.00 sec = 541.6318 Mbps
> 63.9375 MB / 1.00 sec = 536.3595 Mbps
> 63.7500 MB / 1.00 sec = 534.7566 Mbps
> 63.0000 MB / 1.00 sec = 528.5003 Mbps
> 63.5000 MB / 1.00 sec = 532.7150 Mbps
> 64.0000 MB / 1.00 sec = 536.8586 Mbps
> 63.5625 MB / 1.00 sec = 533.2452 Mbps
>
> 637.6688 MB / 10.02 sec = 533.9108 Mbps 9 %TX 9 %RX 9 host-retrans 0.67 msRTT
>
> 25.5625 MB / 1.00 sec = 214.3916 Mbps
> 30.8750 MB / 1.00 sec = 259.0001 Mbps
> 29.9375 MB / 1.00 sec = 251.1347 Mbps
> 27.1875 MB / 1.00 sec = 228.0669 Mbps
> 30.5000 MB / 1.00 sec = 255.8533 Mbps
> 30.2500 MB / 1.00 sec = 253.7551 Mbps
> 26.8125 MB / 1.00 sec = 224.9211 Mbps
> 30.3750 MB / 1.00 sec = 254.8047 Mbps
> 30.3750 MB / 1.00 sec = 254.8050 Mbps
> 30.0625 MB / 1.00 sec = 252.1835 Mbps
>
> 292.2155 MB / 10.02 sec = 244.6825 Mbps 10 %TX 12 %RX 0 host-retrans 0.71 msRTT
>
> As you can see, download is a little more than half of the full link
> speed, and upload is only 20-25% of the full link.

I'm not familiar with that program, but can you increase the test sample
size?  65M isn't a lot of data to push over a 1Gbps link for testing
purposes, and you might be seeing startup overhead.
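With iperf, for instance, something like the following should give a
longer run (the hostname is a placeholder and the flags are from memory,
so double-check the man page):

  # on the receiving box
  iperf -s

  # on the sending box: 60-second run, reporting every 10 seconds
  iperf -c <testhost> -t 60 -i 10

A minute of sustained traffic pushes the test well past any
startup/slow-start effects and should give you a more stable average.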
> I tried changing a lot of sysctl params, but without any big results.
> Currently my entries in /etc/sysctl.conf regarding TCP are:
>
> #kernel tuning, tcp
> kern.ipc.somaxconn=2048
> kern.ipc.nmbclusters=32768
>
> kern.ipc.maxsockbuf=8388608
> net.inet.tcp.sendbuf_max=16777216
> net.inet.tcp.recvbuf_max=16777216
> net.inet.tcp.inflight.enable=0
> net.inet.tcp.sendspace=65536
> net.inet.tcp.recvspace=65536
> net.inet.udp.recvspace=65536
> net.inet.tcp.inflight.enable=0
> net.inet.tcp.rfc1323=1
> net.inet.tcp.sack.enable=1
> net.inet.tcp.path_mtu_discovery=1
> net.inet.tcp.sendbuf_auto=1
> net.inet.tcp.sendbuf_inc=16384
> net.inet.tcp.recvbuf_auto=1
> net.inet.tcp.recvbuf_inc=524288
>
> Do you have any suggestions on what I could change to increase TCP
> performance?
>
> Besides, when I run the benchmarks I also run a sniffer to see what's
> happening on the network.  Sometimes I see that the window size is 0.
> Does that mean the server can't handle something, or that the receive
> buffer size is too small?

If the window size drops to 0, it means the receive buffer on the
receiving system is full and waiting to be flushed by the application.
Considering the fact that you're sending 65M per second, a 16M buffer
might not be large enough.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
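For what it's worth, one way to check whether the receiver really is the
bottleneck is to watch the receiving box while the test runs.  Something
along these lines (port 5001 is iperf's default; adjust for nuttcp, and
treat the exact invocations as from memory):

  # limits currently in effect
  sysctl kern.ipc.maxsockbuf net.inet.tcp.recvbuf_auto net.inet.tcp.recvbuf_max

  # socket queues: a Recv-Q that sits near its maximum means the
  # application isn't draining the buffer fast enough
  netstat -an | grep 5001

  # mbuf/cluster usage and TCP window-related counters
  netstat -m
  netstat -s -p tcp | grep -i window

If Recv-Q stays pegged during the test, the receiving application (or its
buffer size) is the limit rather than the network itself.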