Date:      Wed, 20 Oct 1999 10:01:46 -0400
From:      "Michael Sinz" <Michael.Sinz@sinz.org>
To:        "Geoff Buckingham" <geoffb@chuggalug.clues.com>, "Gerard Roudier" <groudier@club-internet.fr>
Cc:        "Mike Sinz" <Michael@sinz.org>, "Randell Jesup" <rjesup@wgate.com>, "scsi@FreeBSD.ORG" <scsi@FreeBSD.ORG>
Subject:   Re: FreeBSD 3.2 / Slow SCSI Dell PowerEdge 4300
Message-ID:  <199910201401.KAA25925@vixen.sinz.org>


On Sat, 16 Oct 1999 16:00:33 +0200 (MET DST), Gerard Roudier wrote:

>On Sat, 16 Oct 1999, Geoff Buckingham wrote:
>
>> On Fri, Oct 15, 1999 at 07:14:15PM +0000, Randell Jesup wrote:
>> > 	Looking at the bonnie results from 10398:
>> > 
>> > write cache enabled
>> >               -------Sequential Output-------- ---Sequential Input-- --Random--
>> >               -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>> >            MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>> > Number of Tags
>> > NO        100  7222 89.2  6801 21.8  2347  8.5  7330 93.3  7368 14.6 226.5  4.7
>> > 2         100  7263 90.3  6357 20.3  2730  9.9  7025 90.5  7321 14.9 209.4  4.6
>> > 3         100  7115 88.1  6406 20.8  2289  8.9  7307 93.9  7335 15.0 212.6  4.5
[...]
>> 
>> Having brought bonnie into this I must offer some words of caution.
>> 
>> Bonnie only has three seekers for the random seek test, which is potentially
>> very different from a heavily loaded qmail/exim/apache box.

Most server operators who run large FreeBSD (or other) servers also have
multiple spindles.  While a single large 50GB drive seems nice (and is
rather fast for a single drive), actual drive performance is well below
the bus bandwidth of quality SCSI or FibreChannel interconnects.

Tags and deselection are important and closely related concepts.  On systems
where the disks really get hit hard, you spread the data out over multiple
spindles (each of which may or may not be a RAID), so the mail queue,
http server, and pop server may well live on different spindles.  This lets
the I/O of one device complete while another is still seeking.
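As a rough illustration of why overlapping spindles helps (all numbers below are hypothetical, not measurements from the Dell box in question), compare the total service time for a batch of requests on one spindle versus two spindles working in parallel:

```python
# Hypothetical model: 100 independent requests, each costing one
# seek+rotate plus one transfer.  With two spindles the two halves of
# the load proceed in parallel, so wall-clock time is roughly halved.
SEEK_MS = 8.0    # assumed average seek + rotational delay
XFER_MS = 2.0    # assumed per-request transfer time
N = 100

one_spindle = N * (SEEK_MS + XFER_MS)
# Requests split evenly; each drive seeks and transfers independently.
two_spindles = (N / 2) * (SEEK_MS + XFER_MS)

print(f"one spindle:  {one_spindle:.0f} ms")
print(f"two spindles: {two_spindles:.0f} ms")
```

This ignores bus contention and uneven load, but it shows the basic effect: seeks on one spindle overlap with transfers on another.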

Command tagging works much the same way, except that it lets the SCSI
device itself do the reordering.  If the SCSI device happens to be a
multi-drive storage array, this can again provide larger benefits than
it would on a single drive.
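A small sketch of why letting the device reorder its queue pays off (the track numbers are made up for illustration): servicing queued requests in arrival order forces the head back and forth, while a sorted "elevator" sweep, which tagged queueing permits the drive to perform, covers the same requests with far less head movement:

```python
# Compare total head travel for FIFO service vs. a sorted elevator
# sweep over the same queued requests (hypothetical track numbers).

def total_seek(tracks, start=0):
    """Sum of head-movement distances servicing tracks in the given order."""
    pos, dist = start, 0
    for t in tracks:
        dist += abs(t - pos)
        pos = t
    return dist

arrival = [500, 20, 480, 40, 460]        # queued requests, arrival order
fifo = total_seek(arrival)               # no reordering allowed
elevator = total_seek(sorted(arrival))   # drive sweeps in one direction

print(f"FIFO travel:     {fifo} tracks")
print(f"elevator travel: {elevator} tracks")
```

The reordering only helps when several requests are actually outstanding at once, which is exactly the multi-threaded I/O case discussed below.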

In fact, on multi-drive solutions, overall completion time may be
significantly better with tag/deselect operations, even though any single
operation in such an environment is a bit slower (more overhead).

>Indeed.  As I noted in some other posting, it seems there is some subtle
>side effect when tags are disabled that makes multithreaded IO-streams
>replaced by a succession of single-threaded IO-streams that may last
>seconds. Just thinking to disk IOs sorting + reading ahead at the same
>time let guess the reasons. 
>
>I also suggest to measure _interactivity_. No need to be overall faster if
>a user that download a large file, for example, can take precedence over
>another user that download a small file due to the IO scheduling policy
>performing mostly batching instead of multi-threaded IOs. 

Interactivity is a difficult thing to actually measure.  It is a "feel"
of the system.  Sometimes the "feel" can be very fast and yet the actual
performance can be rather poor.

Interactivity matters most when users directly notice the effects.  On
servers this is often less of an issue, since a number of things come
between the user and the server.  (The client program tends to be the
most important factor in providing the interactive "feel".)

In general, one does not want to hold off a process for very long.  Thus
a single large I/O by another process should not force your process to
wait until that whole I/O completes.  One trick is to never issue a very
large I/O operation as a single block.  Another is to make the I/O
subsystem multi-threaded.  (There is a limit to this, since a single
disk drive has only one head and can therefore perform only one
operation at any given instant.)  However, breaking I/O operations into
smaller ones does increase overhead...
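A back-of-the-envelope calculation shows why chunking matters (the 10 MB/s throughput figure is an assumption, roughly in line with the bonnie numbers quoted above): if the I/O subsystem can interleave requests between chunks, another process waits behind one chunk instead of behind the whole transfer:

```python
# Worst-case wait for a competing request: behind one monolithic 64 MB
# write vs. behind a single 64 KB chunk of the same write, assuming the
# scheduler can slip other requests in between chunks.
MB_PER_SEC = 10.0          # assumed sequential disk throughput
large_io_mb = 64.0
chunk_kb = 64.0

wait_behind_large = large_io_mb / MB_PER_SEC           # seconds
wait_behind_chunk = (chunk_kb / 1024.0) / MB_PER_SEC   # seconds

print(f"behind whole write: {wait_behind_large:.1f} s")
print(f"behind one chunk:   {wait_behind_chunk * 1000:.2f} ms")
```

The price, as noted, is more per-operation overhead: a thousand small requests cost a thousand setups instead of one.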

>> Most machines nowadays have a lot of RAM, so large testfiles need to be used
>> to minimise caching effects (I have always used -s 1024 but this takes
>> some time :-)

This is partially a testing issue but it also brings up the point of
how much I/O performance has an impact on observed system performance.
This depends very much on the working set of data that is being processed.

Data that is requested over and over again (such as simple static web pages
or CGI scripts on a web server) will generally end up in the cache.

It is when you get cache misses that you end up really hitting the I/O
performance limits.
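The cost of those misses is easy to underestimate, because a disk access is orders of magnitude slower than a cache hit.  With assumed latencies (hypothetical, but of the right magnitude for 1999-era hardware), average access time as a function of hit ratio looks like this:

```python
# Average access time vs. cache hit ratio, with assumed latencies.
# A few percent more misses dominates the average, since the disk
# is roughly a thousand times slower than memory.
T_CACHE_MS = 0.01    # assumed in-memory (cache hit) access time
T_DISK_MS = 10.0     # assumed disk (cache miss) access time

def avg_access_ms(hit_ratio):
    """Expected access time given the fraction of accesses that hit cache."""
    return hit_ratio * T_CACHE_MS + (1.0 - hit_ratio) * T_DISK_MS

for h in (0.99, 0.95, 0.90):
    print(f"hit ratio {h:.2f}: {avg_access_ms(h):.3f} ms average")
```

Going from a 99% to a 90% hit ratio makes the average access nearly ten times slower, which is why working-set size relative to cache size matters so much.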

Given the right (or wrong) combination of cache size, working set, and
number of users, adding a bit of per-operation overhead in order to
improve overall I/O performance can cut either way.  If the I/O *happens*
to behave in a single-threaded way, adding tags will just add overhead
and no benefit.  If the I/O does not work out single-threaded and the
I/O subsystem does not multi-thread, you end up with lower responsiveness
and even lower performance.

IMHO:
For a general-purpose server, one must assume that the special case of
the I/O working out to be single-threaded will not happen.  Multiple things
will be going on, and the working set will be larger than the cache size.
A bit of overhead added to the "simple" cases will make the general
operation better.  Benchmarks, however, may well show this as slower,
since some extra overhead had to be added.  Benchmarks would need to
become much more complex in order to show the real benefit, or lack of
benefit, of any one technique.

-- 
Michael Sinz -- Director of Research & Development, NextBus Inc.
mailto:michael.sinz@nextbus.com --------- http://www.nextbus.com
My place on the web ---> http://www.users.fast.net/~michael_sinz






