Date: Thu, 10 Dec 2009 08:56:12 -0500
From: Bill Moran <wmoran@collaborativefusion.com>
To: "Noisex" <noisex@apollo.lv>
Cc: freebsd-performance@freebsd.org
Subject: Re: FreeBSD TCP tuning and performance
Message-ID: <20091210085612.098f8aae.wmoran@collaborativefusion.com>
In-Reply-To: <056c01ca773a$a88f69e0$f9ae3da0$@lv>
References: <4B108A18.207@truschinski.de>
	<B36EDB3F-79AB-4365-9E22-AA7A9E838393@gmail.com>
	<584ec6bb0911291330o11fba282y400e0abf121f5e7f@mail.gmail.com>
	<b8592ed80911300352n6e05be32l1435bb1b27ece071@mail.gmail.com>
	<056c01ca773a$a88f69e0$f9ae3da0$@lv>
In response to "Noisex" <noisex@apollo.lv>:

> Hi! I have a problem with TCP performance on FreeBSD boxes with 1 Gbps
> network interfaces (Broadcom NetXtreme II BCM5708 1000Base-T (B2)).
> Currently I use FreeBSD 7.1 AMD64.
>
> The test lab: 2 x (server-client) HP ProLiant DL360 G5 (quad-core,
> 8 GB RAM, RAID 5 SAS).
>
> For the network benchmarks I used nuttcp and iperf.
>
> The servers (client and server) are in one VLAN.

If this is on a switch shared with other (busy) systems, you might be
measuring the saturation/capacity of the switch (even if you have those
two units on a dedicated VLAN).  Try the test with a crossover cable to
eliminate that possibility.

> The results on 1 Gbps (down & up):
>
> 63.4375 MB / 1.00 sec = 532.1332 Mbps
> 64.3750 MB / 1.00 sec = 540.0426 Mbps
> 62.8125 MB / 1.00 sec = 526.8963 Mbps
> 64.5625 MB / 1.00 sec = 541.6318 Mbps
> 63.9375 MB / 1.00 sec = 536.3595 Mbps
> 63.7500 MB / 1.00 sec = 534.7566 Mbps
> 63.0000 MB / 1.00 sec = 528.5003 Mbps
> 63.5000 MB / 1.00 sec = 532.7150 Mbps
> 64.0000 MB / 1.00 sec = 536.8586 Mbps
> 63.5625 MB / 1.00 sec = 533.2452 Mbps
>
> 637.6688 MB / 10.02 sec = 533.9108 Mbps 9 %TX 9 %RX 9 host-retrans 0.67 msRTT
>
> 25.5625 MB / 1.00 sec = 214.3916 Mbps
> 30.8750 MB / 1.00 sec = 259.0001 Mbps
> 29.9375 MB / 1.00 sec = 251.1347 Mbps
> 27.1875 MB / 1.00 sec = 228.0669 Mbps
> 30.5000 MB / 1.00 sec = 255.8533 Mbps
> 30.2500 MB / 1.00 sec = 253.7551 Mbps
> 26.8125 MB / 1.00 sec = 224.9211 Mbps
> 30.3750 MB / 1.00 sec = 254.8047 Mbps
> 30.3750 MB / 1.00 sec = 254.8050 Mbps
> 30.0625 MB / 1.00 sec = 252.1835 Mbps
>
> 292.2155 MB / 10.02 sec = 244.6825 Mbps 10 %TX 12 %RX 0 host-retrans 0.71 msRTT
>
> As you can see, download is a little more than half of the full link
> speed, and upload is only 20-25% of the full link.

I'm not familiar with that program, but can you increase the test
sample size?  65M isn't a lot of data to push over a 1 Gbps link for
testing purposes, and you might be seeing startup overhead.  (Example
invocations for longer runs are at the end of this message.)

> I tried changing a lot of sysctl parameters, but without big results.
> Currently the entries in my /etc/sysctl.conf relating to TCP are:
>
> #kernel tuning, tcp
> kern.ipc.somaxconn=2048
> kern.ipc.nmbclusters=32768
>
> kern.ipc.maxsockbuf=8388608
> net.inet.tcp.sendbuf_max=16777216
> net.inet.tcp.recvbuf_max=16777216
> net.inet.tcp.inflight.enable=0
> net.inet.tcp.sendspace=65536
> net.inet.tcp.recvspace=65536
> net.inet.udp.recvspace=65536
> net.inet.tcp.rfc1323=1
> net.inet.tcp.sack.enable=1
> net.inet.tcp.path_mtu_discovery=1
> net.inet.tcp.sendbuf_auto=1
> net.inet.tcp.sendbuf_inc=16384
> net.inet.tcp.recvbuf_auto=1
> net.inet.tcp.recvbuf_inc=524288
>
> Do you have any suggestions on what I could change to increase TCP
> performance?

See the bandwidth-delay-product arithmetic at the end of this message
for one thing worth double-checking in those values.

> Also, when I run the benchmarks I run a sniffer to see what's
> happening on the network.  Sometimes I see that the window size is 0.
> Does that mean the server can't keep up, or that the receive buffer
> size is too small?

If the window size drops to 0, it means the receive buffer on the
receiving system is full and waiting to be flushed by the application.
Considering the fact that you're moving roughly 65 MB per second, a 16M
buffer might not be large enough.  (A tcpdump filter for spotting
zero-window segments is at the end of this message.)
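To make the runs longer, something along these lines should work; the
flag spellings are from memory, so double-check them against nuttcp(8)
and iperf(1) on your boxes:

    # receiver side
    nuttcp -S                 # run nuttcp as a server
    iperf -s -w 1M            # iperf server with a 1 MB socket buffer

    # transmitter side ("receiver" is a placeholder for the other box)
    nuttcp -T60 -i1 -w1024 receiver     # 60 s run, 1 s reports, 1024 KB window
    iperf -c receiver -t 60 -i 1 -w 1M  # 60 s run, 1 s reports, 1 MB window

A 60-second run moves enough data that connection setup and slow start
stop mattering in the average.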
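On the buffer sizes: your measured RTT was about 0.67 ms, so the
bandwidth-delay product of the path is small.  Quick arithmetic with
bc (nothing FreeBSD-specific here):

    # bytes in flight needed to keep 1 Gbps busy at 0.67 ms RTT
    echo "1000000000 * 0.00067 / 8" | bc    # prints 83750, i.e. ~84 KB

That is slightly more than the 65536-byte sendspace/recvspace you set,
so a single stream depends on the buffer autotuning you enabled to grow
past the initial size.  Also, if I remember correctly,
kern.ipc.maxsockbuf is a hard cap on socket buffer sizes, so the 16M
sendbuf_max/recvbuf_max over an 8M maxsockbuf may be getting clamped;
read the live values back with:

    sysctl kern.ipc.maxsockbuf net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max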
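For the zero-window sightings, you can watch for them directly instead
of eyeballing the sniffer output.  bce0 is only my guess at the
interface name for that Broadcom chip; substitute whatever ifconfig
shows on your boxes:

    # match TCP segments advertising a zero receive window
    # (bytes 14-15 of the TCP header hold the window field)
    tcpdump -ni bce0 'tcp[14:2] = 0'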
-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/