From owner-freebsd-stable@FreeBSD.ORG Mon Mar 8 14:53:20 2010
Date: Mon, 8 Mar 2010 14:53:19 +0000 (GMT)
From: Robert Watson <rwatson@FreeBSD.org>
To: current@FreeBSD.org, stable@FreeBSD.org
Subject: Survey results very helpful, thanks! (was: Re: net.inet.tcp.timer_race: does anyone have a non-zero value?)
List-Id: Production branch of FreeBSD source code

On Sun, 7 Mar 2010, Robert Watson wrote:

> If your system shows a non-zero value, please send me a *private e-mail*
> with the output of that command, plus also the output of "sysctl kern.smp",
> "uptime", and a brief description of the workload and network interface
> configuration. For example: it's a busy 8-core web server with roughly X
> connections/second, and that has three em network interfaces used to load
> balance from an upstream source. IPSEC is used for management purposes (but
> not bulk traffic), and there's a local MySQL database.

I've now received a number of reports confirming our suspicion that the race
does occur, albeit very rarely, and particularly on systems with many cores
or multiple network interfaces. Fixing it is definitely on the TODO list for
9.0, both to improve our ability to run multiple virtual network stacks and
to do so with an appropriately scalable fix in mind, given our improved TCP
scalability work for 9.0 as well.

Thanks for all the responses,

Robert N M Watson
Computer Laboratory
University of Cambridge