From owner-freebsd-geom@FreeBSD.ORG Fri Sep 24 21:52:03 2004
Return-Path:
Delivered-To: freebsd-geom@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id 817BD16A4CE;
	Fri, 24 Sep 2004 21:52:03 +0000 (GMT)
Received: from gromit.dlib.vt.edu (gromit.dlib.vt.edu [128.173.49.29])
	by mx1.FreeBSD.org (Postfix) with ESMTP id 10B8C43D6B;
	Fri, 24 Sep 2004 21:52:03 +0000 (GMT)
	(envelope-from paul@gromit.dlib.vt.edu)
Received: from hawkwind.Chelsea-Ct.Org
	(pool-151-199-91-61.roa.east.verizon.net [151.199.91.61])
	by gromit.dlib.vt.edu (8.13.1/8.13.1) with ESMTP id i8OLq0Sk016360
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 24 Sep 2004 17:52:02 -0400 (EDT)
	(envelope-from paul@gromit.dlib.vt.edu)
Received: from [192.168.1.25] (zappa [192.168.1.25]) i8OLpsR7023030;
	Fri, 24 Sep 2004 17:51:55 -0400 (EDT)
From: Paul Mather
To: Pawel Jakub Dawidek
In-Reply-To: <20040924075553.GE9550@darkness.comp.waw.pl>
References: <1095993821.5665.124.camel@zappa.Chelsea-Ct.Org>
	<20040924075553.GE9550@darkness.comp.waw.pl>
Content-Type: text/plain
Message-Id: <1096062713.9306.119.camel@zappa.Chelsea-Ct.Org>
Mime-Version: 1.0
X-Mailer: Ximian Evolution 1.4.6
Date: Fri, 24 Sep 2004 17:51:54 -0400
Content-Transfer-Encoding: 7bit
cc: freebsd-geom@freebsd.org
Subject: Re: gstripe stripe size units?
X-BeenThere: freebsd-geom@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: GEOM-specific discussions and implementations
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 24 Sep 2004 21:52:03 -0000

On Fri, 2004-09-24 at 03:55, Pawel Jakub Dawidek wrote:
> The best you can do is to just try it. There is a tool for this, which
> I wrote for tests like this: src/tools/tools/raidtest/.

Thanks for the pointer.  I used that program over various stripe sizes
(doubling each time) on a two-drive geom_stripe.
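For reference while reading the numbers below, textbook RAID0 address
translation on a two-drive stripe can be sketched like this (a minimal
illustration only, not geom_stripe's actual code; the function name and
the 64 KB example are mine):

```python
def stripe_map(offset, stripe_size, ndisks):
    """Map a byte offset on the striped volume to (disk, offset_on_disk).

    A sketch of plain RAID0 address translation, not GEOM's implementation.
    """
    stripe_no, within = divmod(offset, stripe_size)
    disk = stripe_no % ndisks                   # stripes rotate round-robin
    disk_offset = (stripe_no // ndisks) * stripe_size + within
    return disk, disk_offset

# With a 64 KB stripe on two drives, consecutive 64 KB chunks alternate:
print(stripe_map(0, 65536, 2))       # (0, 0)
print(stripe_map(65536, 65536, 2))   # (1, 0)
print(stripe_map(131072, 65536, 2))  # (0, 65536)
```

The relevance to stripe size: the smaller the stripe, the more often a
single large request crosses a stripe boundary and has to be split across
drives, which is where "fast" mode's in-memory reorganisation comes in.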
In my tests, it seemed that for stripes of 4096--32768 the performance
was roughly even (+1 req/sec at each doubling).  For 65536--524288 there
was a reasonable increase (+10 req/sec at each doubling).  After that it
levelled off somewhat, until, from 4 MB to 16 MB, it settled at 131--132
req/sec.

> If gstripe is running in "fast" mode (kern.geom.stripe.fast=1), size of
> stripe could be small, because then, it still sends as large I/O requests
> as possible and reorganize the data in memory, but this method consumes
> a lot of memory if you want it to be efficient.

One thing that puzzles me is that no matter how large I made the stripe
size, I never got kern.geom.stripe.fast_failed > 0.  Here is what I have
after the last run of raidtest, with a stripe size of 16 MB:

>>>>>
Read 50000 requests from raidtest.data.
Number of READ requests: 24991.
Number of WRITE requests: 25009.
Number of bytes to transmit: 3288266752.
Number of processes: 10.
Bytes per second: 8745390
Requests per second: 132
<<<<<

Here are the values of the kern.geom.stripe sysctls afterwards:

>>>>>
kern.geom.stripe.debug: 0
kern.geom.stripe.fast: 1
kern.geom.stripe.maxmem: 6553600
kern.geom.stripe.fast_failed: 0
<<<<<

I'm puzzled because a 16 MB stripe will not fit in 6553600 bytes, so
surely fast_failed should be > 0 at stripe sizes of 8 MB or greater,
given the above value of kern.geom.stripe.maxmem?

Also, I don't know what the distribution of request sizes in
raidtest.data is.  The raidtest program operates on the raw device, and
so may not necessarily issue requests the way that, say, a UFS
filesystem would.

Any thoughts on tuning stripe size for a UFS filesystem so as not to
have a bad effect on the VM system?

Cheers,

Paul.

-- 
e-mail: paul@gromit.dlib.vt.edu

"Without music to decorate it, time is just a bunch of boring production
 deadlines or dates by which bills must be paid."
	--- Frank Vincent Zappa
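P.S. The arithmetic behind my puzzlement, spelled out (the maxmem value
and stripe sizes are the ones quoted above):

```python
# kern.geom.stripe.maxmem versus the largest stripe sizes I tested.
maxmem = 6553600                             # bytes, from the sysctl output
for stripe in (4 << 20, 8 << 20, 16 << 20):  # 4 MB, 8 MB, 16 MB
    print(stripe >> 20, "MB stripe fits in maxmem:", stripe <= maxmem)
# 4 MB stripe fits in maxmem: True
# 8 MB stripe fits in maxmem: False
# 16 MB stripe fits in maxmem: False
```

So, at face value, any stripe of 8 MB or more exceeds maxmem, which is
why I expected fast_failed to climb above zero.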