Date:      Fri, 24 Sep 2004 17:51:54 -0400
From:      Paul Mather <paul@gromit.dlib.vt.edu>
To:        Pawel Jakub Dawidek <pjd@freebsd.org>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: gstripe stripe size units?
Message-ID:  <1096062713.9306.119.camel@zappa.Chelsea-Ct.Org>
In-Reply-To: <20040924075553.GE9550@darkness.comp.waw.pl>
References:  <1095993821.5665.124.camel@zappa.Chelsea-Ct.Org> <20040924075553.GE9550@darkness.comp.waw.pl>

On Fri, 2004-09-24 at 03:55, Pawel Jakub Dawidek wrote:

> The best you can do is to just try it. There is a tool for this, which
> I wrote for tests like this: src/tools/tools/raidtest/.

Thanks for the pointer.  I used that program over various stripe sizes
(doubling each time) on a two-drive geom_stripe.
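
For the record, each run went roughly like this (the device name, providers and the 64 KB stripe size below are just examples, and the exact gstripe subcommand spellings are from memory; see gstripe(8)):

>>>>>
# Tear down the previous stripe and erase its on-disk metadata:
gstripe stop data
gstripe clear /dev/da0 /dev/da1

# Re-create it with the next stripe size (64 KB here):
gstripe label -v -s 65536 data /dev/da0 /dev/da1

# Then run raidtest against the new /dev/stripe/data device,
# replaying the pre-generated raidtest.data request file
# (see the tool's usage output for its exact arguments).
<<<<<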

In my tests, performance for stripe sizes of 4096--32768 bytes was roughly
flat (about +1 req/sec at each doubling).  For 65536--524288 bytes there
was a reasonable increase (about +10 req/sec at each doubling).  After
that it levelled off somewhat; from 4 MB to 16 MB it settled at 131--132
req/sec.

> If gstripe is running in "fast" mode (kern.geom.stripe.fast=1), size of
> stripe could be small, because then, it still sends as large I/O requests
> as possible and reorganize the data in memory, but this method consumes
> a lot of memory if you want it to be efficient.
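
(For the archive, this is how I have "fast" mode set up; kern.geom.stripe.fast is writable at runtime, but kern.geom.stripe.maxmem appears to be read-only once booted, so I assume raising it means setting a loader tunable and rebooting -- the 32 MB value below is just an example:)

>>>>>
# Enable "fast" mode at runtime:
sysctl kern.geom.stripe.fast=1

# kern.geom.stripe.maxmem seems settable only at boot, e.g. in
# /boot/loader.conf (example value, 32 MB):
#   kern.geom.stripe.maxmem="33554432"
<<<<<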

One thing that puzzles me is that no matter how large I made the stripe
size, I never got a kern.geom.stripe.fast_failed > 0.  Here is what I
have after the last run of raidtest with a stripe size of 16 MB:

>>>>>
Read 50000 requests from raidtest.data.
Number of READ requests: 24991.
Number of WRITE requests: 25009.
Number of bytes to transmit: 3288266752.
Number of processes: 10.
Bytes per second: 8745390
Requests per second: 132
<<<<<

Here are the values afterwards for kern.geom.stripe sysctls:
>>>>>
kern.geom.stripe.debug: 0
kern.geom.stripe.fast: 1
kern.geom.stripe.maxmem: 6553600
kern.geom.stripe.fast_failed: 0
<<<<<

I'm puzzled because a 16 MB stripe will not fit in 6553600 bytes, so with
the above value of kern.geom.stripe.maxmem surely fast_failed should be
> 0 for stripe sizes of 8 MB or greater in my tests?
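
To spell out the arithmetic behind my confusion, using the values above:

>>>>>
# kern.geom.stripe.maxmem = 6553600 bytes  (~6.25 MB)
# stripe size             = 16777216 bytes (16 MB)
# A single 16 MB stripe is therefore larger than maxmem, so I would
# have expected the "fast" path to fail and increment
# kern.geom.stripe.fast_failed:
echo $((16777216 > 6553600))    # prints 1
<<<<<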

Also, I don't know what the distribution of request sizes in
raidtest.data is.  The raidtest program operates over the raw device, so
it may not issue requests in the same pattern that, say, a UFS
filesystem would.

Any thoughts on tuning the stripe size for a UFS filesystem so as not to
have a bad effect on the VM system?
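
(My working assumption, in case it helps frame an answer, has been that the stripe size should at least be a multiple of the filesystem block size; the device name below is just an example:)

>>>>>
# Check the block/fragment sizes newfs used on the filesystem:
dumpfs /dev/stripe/data | grep -i bsize

# They can also be chosen explicitly at newfs time, e.g.:
#   newfs -b 16384 -f 2048 /dev/stripe/data
<<<<<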

Cheers,

Paul.
-- 
e-mail: paul@gromit.dlib.vt.edu

"Without music to decorate it, time is just a bunch of boring production
 deadlines or dates by which bills must be paid."
        --- Frank Vincent Zappa


