Date:      Fri, 28 Jan 2005 01:14:54 +0000 (GMT)
From:      Robert Watson <rwatson@FreeBSD.org>
To:        Nick Pavlica <linicks@gmail.com>
Cc:        Mike Tancsa <mike@sentex.net>
Subject:   Re: FreeBSD 5.3 I/O Performance / Linux 2.6.10 | Continued Discussion
Message-ID:  <Pine.NEB.3.96L.1050128010648.68140B-100000@fledge.watson.org>
In-Reply-To: <dc9ba04405012717045622a60f@mail.gmail.com>

On Thu, 27 Jan 2005, Nick Pavlica wrote:

> > The move to an MPSAFE VFS will help with that a lot, I should think.
> 
> Do you know if this will find its way to 5.x in the near future?

Hopefully not too quickly; it's fairly experimental.  I know there's
interest in getting it into 5.x, however.  Perhaps once it has settled
for a few months and we've confirmed that in the "off" state it's quite
harmless, it can be merged.

> > Also, while on face value this may seem odd, could you try the following
> > additional variables:
> > 
> > - Layer the test UFS partition directly over ad0 instead of ad0s1a
> > - UFS1 vs UFS2
> 
> I just tested with UFS1 and had almost the exact same results.

OK, thanks.
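
(For the record, in case anyone else wants to reproduce the test matrix:
on 5.x the on-disk format is selected at newfs time, so something like
the following should set up the two variants.  ad0 is just an example
device name here, and newfs will of course destroy whatever is on it.)

  # UFS1, layered directly on the bare device:
  newfs -O 1 /dev/ad0

  # UFS2, same layout:
  newfs -O 2 /dev/ad0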

> > Finally, as far as possible, make sure that the layout of the disks
> > is approximately the same -- as countless benchmarking papers show, there
> > are substantial differences (10%+) in I/O throughput depending on where on
> > the disk surface operations occur.  That's one of the reasons to try UFS1
> > for the test partition, although not the only one.
> 
> My tests use the exact same disk layout and hardware.  However, I have
> had consistent results on all 4 boxes that I have tested on.
> 
> At this point I'm making the assumption that the poor disk I/O
> performance on 5.3 isn't a file system issue, but is tied to a larger
> issue with the kernel (I know, never make assumptions ... :)).  In all
> my testing, I have noticed that 5.3 doesn't appear to release CPU
> resources even if there isn't any other demand for them.  I would
> compare it to driving a car with a governor on it.  When I tested with
> 4.11, it allocated considerably more resources.  I do hope that the 5.x
> issues are resolved soon so that I can deploy my production servers on
> it rather than starting on 4 and then making the big switch.  I will
> probably test 6 for the fun of it.

Forgive me if this was in previous e-mails and I missed it, but -- how
does I/O directly on /dev/[diskdevice] compare to I/O through the file
system?  In particular, it's interesting to compare both large block I/O
(reads and writes at fairly large multiples of the sector size -- 512k is
a good number) and small I/O (512 bytes is good).  This will help
identify the source along two dimensions: are we looking at a basic
storage I/O problem that's present even without the file system, or can
we conclude that some of the additional cost is in the file system code
or the handoff to it?  Also, with the large and small I/O sizes, we can
perhaps draw some conclusions about the extent to which the source is
per-transaction overhead.
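
In case it's useful, here's roughly what I have in mind -- a quick
sketch using dd against the raw device.  (Again, ad0 is just an example
name, and the write test is destructive, so point it only at a disk
whose contents you don't care about.)

  # Large sequential reads (512k blocks), no file system involved:
  dd if=/dev/ad0 of=/dev/null bs=512k count=2048

  # Small reads (512 byte blocks), to expose per-transaction overhead:
  dd if=/dev/ad0 of=/dev/null bs=512 count=100000

  # Large writes -- DESTRUCTIVE, scratch disks only:
  dd if=/dev/zero of=/dev/ad0 bs=512k count=2048

dd prints the transfer rate when it completes, so comparing those
numbers across 4.x and 5.x, and against the same transfers through a
file on the mounted file system, should be fairly telling.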

Finally -- I figure you've done this already, but it's worth asking -- can
you confirm that your hardware is negotiating the same basic parameters
under 5.x and 4.x?  In particular, the ATA code has changed
substantially, so if you're using ATA hardware you'll want to confirm
that the same DMA mode is negotiated.
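
The negotiated mode shows up in the boot-time probe messages on both
branches, e.g. (ad0 again just an example):

  dmesg | grep ^ad0

should print something like "at ata0-master UDMA100".  On 5.x,
atacontrol(8) can also query and force the transfer mode, though its
argument syntax has varied between versions, so check the man page on
your system.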

Robert N M Watson


