From: Bakul Shah <bakul@bitblocks.com>
To: Petri Helenius
Cc: Bakul Shah, performance@freebsd.org
Date: Tue, 10 May 2005 13:32:32 -0700
Subject: Re: Regression testing (was Re: Performance issue)
In-reply-to: Your message of "Tue, 10 May 2005 22:51:46 +0300." <428110D2.8070004@he.iki.fi>
Message-Id: <200505102032.j4AKWWcD073387@gate.bitblocks.com>

> This sounds somewhat similar to Solaris dtrace stuff?

Dtrace can be a (very useful) component for collecting performance
metrics.  What I am talking about is a framework where you'd apply
dtrace or other micro/system-level performance tests or benchmarks
on a regular basis, for a variety of machines, loads, etc., and
collate the results in a usable form.

The purpose is to provide an ongoing view of how the performance of
various subsystems, and of the system as a whole, changes for various
loads and configurations as the codebase evolves.  This gives an
early warning of performance loss (as seen in the -5.x versus -4.x
releases) as well as early confirmation of improvements (as seen in
-6.x versus -5.x).  Users can provide early feedback without having
to wait for a release.

It is difficult and time-consuming for developers to measure the
impact of their changes across a variety of systems, configurations
and loads.  A centralized performance-measuring system can be very
valuable here.  If you see that, e.g., a new scheduler has a terrible
impact on some systems or loads, you'd either come up with something
better or provide a knob.  If you see that a nifty new feature has a
significant performance cost, you'd be less tempted to make it the
default (or at least others would get a chance to scream early on).
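
To make this concrete, below is a rough sketch in Python of the kind
of per-machine harness I mean.  Everything in it (the benchmark list,
the results.csv store, the 10% regression threshold, using `uname -v`
as a stand-in for a source revision id) is made up for illustration;
it is a sketch of the idea, not an existing tool.

#!/usr/bin/env python
# Hypothetical nightly benchmark harness: a sketch of the idea only.
# Runs a fixed set of benchmarks, appends results keyed by (revision,
# machine, benchmark) to a CSV file, and flags regressions against
# the previously recorded run on the same machine.

import csv
import os
import platform
import subprocess
import time

RESULTS_FILE = "results.csv"    # assumed central results store
REGRESSION_THRESHOLD = 0.10     # flag slowdowns over 10% (arbitrary)

# Placeholder benchmark commands; a real framework would run dtrace
# scripts, micro-benchmarks, buildworld timings, network loads, etc.
BENCHMARKS = {
    "dd_null": ["dd", "if=/dev/zero", "of=/dev/null",
                "bs=1m", "count=1000"],
    "md5_kernel": ["md5", "/boot/kernel/kernel"],
}

def run_benchmark(cmd):
    """Return wall-clock seconds for one benchmark run."""
    start = time.time()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.time() - start

def previous_results():
    """Load the last recorded time for each (machine, benchmark)."""
    prev = {}
    if os.path.exists(RESULTS_FILE):
        with open(RESULTS_FILE) as f:
            for rev, machine, name, secs in csv.reader(f):
                prev[(machine, name)] = float(secs)  # last entry wins
    return prev

def main(revision):
    machine = platform.node()
    prev = previous_results()
    with open(RESULTS_FILE, "a", newline="") as f:
        writer = csv.writer(f)
        for name, cmd in BENCHMARKS.items():
            secs = run_benchmark(cmd)
            writer.writerow([revision, machine, name, "%.3f" % secs])
            old = prev.get((machine, name))
            if old and secs > old * (1 + REGRESSION_THRESHOLD):
                print("POSSIBLE REGRESSION: %s on %s: %.1fs -> %.1fs"
                      % (name, machine, old, secs))

if __name__ == "__main__":
    # uname -v as a stand-in for a real source revision identifier
    rev = subprocess.check_output(["uname", "-v"]).decode().strip()
    main(rev)

A real framework would of course store results centrally, tag them
with the exact source revision and kernel config, and grow the
benchmark set over time; the point is only that the per-machine
piece can be quite small.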