From owner-freebsd-current@FreeBSD.ORG Sun Feb 25 10:51:33 2007
Date: Sun, 25 Feb 2007 10:51:31 +0000 (GMT)
From: Robert Watson <rwatson@FreeBSD.org>
To: Kris Kennaway
Cc: smp@freebsd.org, hackers@freebsd.org, current@freebsd.org, cokane@cokane.org
Subject: Re: Progress on scaling of FreeBSD on 8 CPU systems
In-Reply-To: <20070225054120.GA47059@xor.obsecurity.org>
Message-ID: <20070225104709.S36322@fledge.watson.org>
References: <20070224213111.GB41434@xor.obsecurity.org> <346a80220702242100i7ec22b5h4b25cc7d20d03e98@mail.gmail.com> <20070225054120.GA47059@xor.obsecurity.org>

On Sun, 25 Feb 2007, Kris Kennaway wrote:

> On Sat, Feb 24, 2007 at 10:00:35PM -0700, Coleman Kane wrote:
>
>> What does the performance curve look like for the in-CVS 7-CURRENT tree
>> with 4BSD or ULE?  How do those stand up against the Linux SMP scheduler
>> for scalability?  It would be nice to see a comparison showing what
>> performance improvements the aforementioned patch realized.  This would
>> likely make a nice graphic for the SMPng project page, BTW...
>
> There are graphs of this on Jeff's blog, referenced in that URL.  Fixing
> filedesc locking makes a HUGE difference.

I think the real message of all this is that our locking strategy is
basically pretty reasonable for the paths exercised by this workload (and
quite a few others), but that our low-level scheduler and locking
primitives need a lot of refinement.

The next step here is to look at the impact of these changes (individually
and together) on other hardware configurations and under other workloads.
On the hardware side, I'd very much like to see measurements done on that
rather nasty generation of Intel P4 Xeons where the cost of mutexes was
astronomically out of proportion with other operation costs; historically
that heavily pessimized ULE because of the additional locking it performed
(I don't know whether this still applies).

It would be really great if we could find "workload owners" who would
maintain easy-to-run benchmark configurations, run them regularly on a
fixed hardware configuration over a long period, and publish results and
test patches.  Kris has done this for SQL benchmarks to great effect,
providing a nicely controlled testing environment for a host of
performance-related patches, but SQL is not the be-all and end-all of
application workloads, so having others do similar things with other
benchmarks would be very helpful.

Robert N M Watson
Computer Laboratory
University of Cambridge
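
As a rough illustration of the primitive costs discussed above, the sketch
below (a minimal illustration, not code from this thread) times uncontended
mutex lock/unlock pairs in userland; it assumes POSIX threads and
clock_gettime(CLOCK_MONOTONIC).  Kernel mutexes differ in detail, but the
locked atomic instruction that was so expensive on P4-era Xeons dominates
the cost of both.

/*
 * Minimal sketch (illustrative, not from the original thread): measure
 * the average cost of an uncontended mutex lock/unlock pair.  Assumes
 * POSIX threads and CLOCK_MONOTONIC.
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define	ITERATIONS	10000000UL

int
main(void)
{
	pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
	struct timespec start, end;
	unsigned long i;
	double ns;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++) {
		pthread_mutex_lock(&m);
		pthread_mutex_unlock(&m);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	/* Convert the elapsed time to nanoseconds per lock/unlock pair. */
	ns = (end.tv_sec - start.tv_sec) * 1e9 +
	    (end.tv_nsec - start.tv_nsec);
	printf("%.1f ns per lock/unlock pair\n", ns / ITERATIONS);
	return (0);
}

Built with something like "cc -O2 -o mtx mtx.c -lpthread" (the file name is
arbitrary), running it on one of the P4 Xeons described above alongside a
newer machine should make the disproportionate per-pair cost plainly
visible.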