From: Bakul Shah <bakul@bitblocks.com>
To: performance@freebsd.org
Subject: Regression testing (was Re: Performance issue)
Date: Tue, 10 May 2005 08:18:48 -0700
In-reply-to: Your message of "Tue, 10 May 2005 07:51:49 MDT." <4280BC75.7040408@samsco.org>
Message-Id: <200505101518.j4AFImSv071163@gate.bitblocks.com>

This thread makes me wonder if there is value in running performance
tests on a regular basis.  This would give an early warning of any
performance loss and can be a useful forensic tool (one can pinpoint
when some performance curve changed discontinuously, even if at the
time of the change it was too small to be noticed).  Over a period of
time one can gain a view of how the performance evolves.

This would not be a single metric but a set of low- and high-level
measures, such as: syscall overhead, interrupt overhead, specific h/w
devices, disk and fs performance for various filesystems and file
sizes, networking data and packet throughput, routing performance, VM
and other subsystems, the effect of SMP, various threading libraries,
scaling with the number of users/programs/cpus/memory, typical
applications under normal and stressed loads, compile time for the
system and kernel, etc.

The setup would allow for easy addition of new benchmarks (the only
way anything like this can be bootstrapped).  Of course, one would
need to record disk/processor/memory speeds and capacities, plus
kernel config options and the system build tools and their options,
to interpret the results as well as possible.  For the results to be
useful, the setup has to remain as stable as possible for a long time.

[While I am dreaming...]  A follow-on project would be to create
visualization tools -- mainly graphing and comparing graphs.  It
would be neat if one could click on a performance graph to zoom in or
see the commits made during some selected period.  Such a detailed
look, combined with profiling, can help people focus on specific
hotspots & feel good about any improvements they are making.  This
can be a great way to rope in new people ;-)
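
As a rough illustration (not a worked-out design), the harness could
start out as small as the Python sketch below: it runs a couple of
example benchmarks, records the hardware and kernel configuration
needed to interpret the numbers later, and appends everything to a
results log.  The benchmark commands, file name and record layout are
made up purely for illustration.

#!/usr/bin/env python
# Minimal sketch of a regularly-run performance-regression harness.
# The benchmark commands and the results-file layout are hypothetical
# placeholders; a real setup would plug in real micro/macro benchmarks.

import csv, os, subprocess, time

RESULTS = "perf-results.csv"    # hypothetical log, one row per benchmark per run

# benchmark name -> shell command whose wall-clock time we record
BENCHMARKS = {
    "buildkernel": "cd /usr/src && make buildkernel > /dev/null 2>&1",
    "dd_devzero":  "dd if=/dev/zero of=/dev/null bs=64k count=100000",
}

def system_info():
    """Record enough configuration to interpret the results later."""
    def run(cmd):
        return subprocess.check_output(cmd, shell=True).decode().strip()
    return {
        "kernel": run("uname -a"),
        "cpu":    run("sysctl -n hw.model"),
        "ncpu":   run("sysctl -n hw.ncpu"),
        "mem":    run("sysctl -n hw.physmem"),
    }

def time_command(cmd):
    """Return the wall-clock seconds taken by a shell command."""
    start = time.time()
    subprocess.call(cmd, shell=True)
    return time.time() - start

def main():
    info = system_info()
    first = not os.path.exists(RESULTS)
    with open(RESULTS, "a") as f:
        out = csv.writer(f)
        if first:
            out.writerow(["date", "benchmark", "seconds",
                          "kernel", "cpu", "ncpu", "mem"])
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        for name, cmd in sorted(BENCHMARKS.items()):
            secs = time_command(cmd)
            out.writerow([stamp, name, "%.3f" % secs,
                          info["kernel"], info["cpu"], info["ncpu"], info["mem"]])

if __name__ == "__main__":
    main()

Run from cron every night against a freshly built system, something
like this would already accumulate the kind of history described
above, and new benchmarks are just new entries in the table.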
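
On the graphing side, even something as small as the sketch below
(assuming matplotlib and the CSV layout from the harness sketch
above -- again purely illustrative) would show a trend or a sudden
step in one benchmark; the click-to-see-commits part would of course
take more work.

#!/usr/bin/env python
# Sketch of the graphing side: plot one benchmark's history out of the
# results log written by the harness sketch above (file layout and
# column names are the hypothetical ones used there).

import csv, sys
import matplotlib.pyplot as plt

def plot_history(results_file, benchmark):
    dates, seconds = [], []
    with open(results_file) as f:
        for row in csv.DictReader(f):
            if row["benchmark"] == benchmark:
                dates.append(row["date"])
                seconds.append(float(row["seconds"]))
    plt.plot(range(len(seconds)), seconds, marker="o")
    plt.xticks(range(len(dates)), dates, rotation=45, fontsize=8)
    plt.ylabel("seconds (lower is better)")
    plt.title("%s over time" % benchmark)
    plt.tight_layout()
    plt.show()

if __name__ == "__main__":
    # usage (hypothetical): python plot_perf.py perf-results.csv buildkernel
    plot_history(sys.argv[1], sys.argv[2])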