Date: Fri, 18 Jun 1999 11:04:00 -0700
From: Darryl Okahata <darrylo@sr.hp.com>
To: Greg Lehey <grog@lemis.com>
Cc: freebsd-hackers@FreeBSD.ORG
Subject: Re: vinum performance
Message-ID: <199906181804.LAA12830@mina.sr.hp.com>
In-Reply-To: Your message of "Fri, 18 Jun 1999 18:20:06 +0930." <19990618182006.C2863@freebie.lemis.com>
Greg Lehey <grog@lemis.com> wrote:
> On Friday, 18 June 1999 at 1:14:20 -0700, Darryl Okahata wrote:
>
> > Possible marginally-related data point: with the 3.1-RELEASE vinum,
> > and with striped drives (yes, I know the original user is using
> > concatenated devices), I saw pretty bad write performance with the
> > default filesystem frag size. Increasing the frag size (via newfs),
> > increased performance substantially.
>
> That shouldn't have anything to do with it. If you see anything
> unusual in Vinum performance, please tell me.
It shouldn't, perhaps, have anything to do with it, but it did.
I'm simply reporting empirical results, where I kept the stripe size
constant and varied the filesystem frag size. I was able to get around
a 2X improvement in write speed by increasing the frag size. Why, I
don't know. I do know that I saw what I saw. ;-)
This was, however, using 128K stripe sizes. Perhaps there's an
interaction between small stripes and frag sizes?
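For concreteness, the kind of experiment I mean looks roughly like the sketch below. This is a hypothetical illustration, not my actual configuration: the vinum volume name, mount point, and sizes are assumptions. The frag size is set with newfs -f (block size with -b; FFS wants the block size to be a small power-of-two multiple of the frag size), and in this era the newfs defaults were 8192-byte blocks with 1024-byte frags.

```shell
# Hypothetical sketch -- volume name, mount point, and sizes are
# illustrative assumptions, not my real setup.

# Re-newfs the striped vinum volume with a doubled frag size
# (defaults were -b 8192 -f 1024); newfs wants the raw device:
newfs -b 16384 -f 2048 /dev/vinum/rstripe

# Mount it and run a crude sequential-write check (256 MB of zeros):
mount /dev/vinum/stripe /mnt
dd if=/dev/zero of=/mnt/testfile bs=64k count=4096
```

Comparing the dd elapsed time before and after the re-newfs is where I saw the roughly 2X difference; obviously a real benchmark should also unmount/remount between runs to defeat the buffer cache.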
Also, I'm still stuck using the 3.1-RELEASE vinum. I want to
upgrade to something newer, but I can't do so until I manage to back up
my system (and I've got a lot of files to back up). ;-(
> It's easy to come to
> incorrect conclusions about the cause of performance problems, and
> disseminating them doesn't help. Follow the links at
It's not so much of a conclusion as a data point. I'm simply
reporting what I saw.
Note that I am NOT saying that varying the frag size is the most
significant way of improving performance. I'm sure that you're correct
in your recommendations. However, I was able to significantly affect
write performance simply by changing the frag size. As I've said, I
don't know why, but it happened. I don't know how reproducible this is;
maybe it's related to rotational latencies, the particular drive type,
drive firmware, CPU speed, etc. I don't know -- but I do know that it
happened, and I'm simply reporting a data point.
This is just a single data point, and we all know how dangerous it
is to extrapolate from a single data point. ;-) However, if others
report their findings, we may or may not find a trend.
--
Darryl Okahata
darrylo@sr.hp.com
DISCLAIMER: this message is the author's personal opinion and does not
constitute the support, opinion, or policy of Hewlett-Packard, or of the
little green men that have been following him all day.
