From owner-freebsd-current@FreeBSD.ORG Thu Nov 19 09:11:20 2009
Date: Thu, 19 Nov 2009 09:11:19 +0000 (GMT)
From: Robert Watson <rwatson@FreeBSD.org>
To: Elliot Finley
Cc: freebsd-current@freebsd.org
Subject: Re: 8.0-RC3 network performance regression

On Wed, 18 Nov 2009, Elliot Finley wrote:

> I have several boxes running 8.0-RC3 with pretty dismal network
> performance.  I also have some 7.2 boxes with great performance.  Using
> iperf I did some tests:
>
> server(8.0) <- client (8.0) == 420Mbps
> server(7.2) <- client (7.2) == 950Mbps
> server(7.2) <- client (8.0) == 920Mbps
> server(8.0) <- client (7.2) == 420Mbps
>
> So when the server is 7.2, I have good performance regardless of whether
> the client is 8.0 or 7.2.  When the server is 8.0, I have poor
> performance regardless of whether the client is 8.0 or 7.2.
>
> Has anyone else noticed this?  Am I missing something simple?

I've generally not measured regressions along these lines, but TCP
performance can be quite sensitive to the specific driver version and
hardware configuration.  So far I've generally measured significant TCP
scalability improvements in 8, and moderate raw TCP performance
improvements over real interfaces.  On the other hand, I've seen decreased
TCP performance on the loopback due to scheduling interactions with ULE on
some systems (but not all -- disabling checksum generation/verification
has improved loopback performance on other systems).

The first thing to establish is whether other, similar benchmarks give the
same result, which might allow us to narrow the issue down a bit.  Could
you try netperf+netserver with the TCP_STREAM test and see whether it
differs, using an otherwise identical configuration?

Could you compare the ifconfig link configuration on 7.2 and 8.0 to make
sure there isn't a problem with the driver negotiating, for example, half
duplex instead of full duplex?  Also confirm that the same blend of
LRO/TSO/checksum offloading/etc. is present.

Could you run "procstat -at | grep ifname" (where ifname is your interface
name) and send me the output?

Another thing to keep an eye on is interrupt rates and pin sharing, both
of which are sensitive to driver and ACPI changes.  It wouldn't hurt to
compare vmstat -i rates not just for your network interface, but also for
other devices, to make sure there's no new aliasing.
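Concretely, the checks above might look something like this, run on both a
7.2 and an 8.0 box (a rough sketch only: "em0" and the 192.168.0.10
address are placeholders for your actual interface name and server):

    # On the server, start the netperf daemon; then, on the client, run
    # the TCP_STREAM test against it:
    netserver
    netperf -H 192.168.0.10 -t TCP_STREAM

    # Check negotiated media (watch for half-duplex) and offload flags
    # (TSO4, LRO, RXCSUM, TXCSUM) on both releases:
    ifconfig em0

    # List the kernel threads associated with the interface:
    procstat -at | grep em0

    # Compare interrupt rates for all devices, not just the NIC:
    vmstat -i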
With a new USB stack and plenty of other changes, additional driver code
running when your NIC interrupt fires would be highly measurable.

Finally, two TCP tweaks to try:

(1) Try disabling in-flight bandwidth estimation by setting
    net.inet.tcp.inflight.enable to 0.  Estimation often hurts on
    low-latency, high-bandwidth local Ethernet links, and is sensitive to
    many other issues, including time-keeping.  It may not be the
    "cause", but it's a useful thing to try.

(2) Try setting net.inet.tcp.read_locking to 0, which disables the
    read-write locking strategy on the global TCP locks.  This setting,
    when enabled, significantly improves TCP scalability when dealing
    with multiple NICs or input queues, but is one of the non-trivial
    functional changes in TCP.
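Both are runtime sysctls, so no reboot is needed; a minimal sketch,
assuming both default to 1 on your systems, as the discussion above
suggests:

    # Disable in-flight bandwidth estimation:
    sysctl net.inet.tcp.inflight.enable=0

    # Revert to the old exclusive-locking strategy on the global TCP
    # locks:
    sysctl net.inet.tcp.read_locking=0

    # Re-run the iperf/netperf tests after each change, then restore
    # the defaults:
    sysctl net.inet.tcp.inflight.enable=1
    sysctl net.inet.tcp.read_locking=1

Robert N M Watson
Computer Laboratory
University of Cambridge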