Date:      Wed, 2 Oct 1996 09:51:54 +0200 (IST)
From:      Nadav Eiron <nadav@barcode.co.il>
To:        Fabio Cesar Gozzo <fabio@thomson.iqm.unicamp.br>
Cc:        questions@freebsd.org
Subject:   Re: Interleave size in CCD
Message-ID:  <Pine.BSF.3.91.961002093856.24687A-100000@gatekeeper.barcode.co.il>
In-Reply-To: <199610011800.PAA01299@thomson.iqm.unicamp.br>




On Tue, 1 Oct 1996, Fabio Cesar Gozzo wrote:

> Hello everybody,
> 		I'm trying to concatenate 2 disks in my system (PPro,
> AHA 2940, 2 SCSI 2GB each). The concatenated disk ccd0 will be used
> for large (2GB) scratch files, i.e., intensive read/write activity.
> 	My question is: what would be a good value for interleave?
> 	Small values are good for reads and bigger ones for writes. But
> in this case, I have both kinds of access.
> 	Any hint would be much appreciated.
> 
> 
> 					Fabio Gozzo
> 					fabio@iqm.unicamp.br
> 
> 

Well, here is my hint:
I don't have any specific experience with ccd, but I've configured many
RAID systems (all sorts of hardware and software). The interleave
(sometimes referred to as the stripe size) in a RAID 0 (striping) array
has nothing to do with the balance between read and write operations.
Stripe size only affects that balance when parity is involved, and then
the choice is treated as two separate RAID classes (RAID 3 vs. RAID
4/5); even that is mostly irrelevant now that RAID controllers
implement write-back caches.
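
To make the striping concrete, here is a minimal sketch (plain sh,
with made-up numbers) of how a striped array maps sectors to member
disks. The mapping is the same whether the operation is a read or a
write, which is why the interleave by itself says nothing about the
read/write balance:

    #!/bin/sh
    # Sketch: sector-to-disk mapping in a 2-disk stripe.  The
    # interleave (64 sectors = 32KB) is an assumed example value,
    # not a recommendation.
    ILEAVE=64    # interleave, in 512-byte sectors
    NDISKS=2     # number of member disks
    for sector in 0 63 64 127 128 4000; do
        # Each run of ILEAVE consecutive sectors lives on one member;
        # consecutive runs alternate between the members.
        disk=$(( (sector / ILEAVE) % NDISKS ))
        echo "sector $sector -> member disk $disk"
    done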

The tradeoff with stripe size (and, for the most part, with the RAID 3
vs. RAID 5 decision) is whether the disks see a more-or-less single
stream of requests or many concurrent users, and how large the requests
are. Small stripe sizes work best if you have a single process
generating relatively long I/O requests (as most DBMSs do): a small
stripe size makes the array behave like a normal disk, but with double
the transfer rate. If, on the other hand, you have many users doing
relatively short, random I/Os, you'd be better off with a large stripe
size (larger than the longest request that will be made to the array).
Since each request then fits on a single member disk, the two disks can
serve two requests concurrently: the access time is halved on average,
the transfer rate of each individual request stays the same as that of
a single disk, and the total throughput is, of course, still doubled.
When there are many users, large stripe sizes provide the best
throughput.
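
A quick back-of-the-envelope check (again plain sh, with assumed
numbers, and assuming each request starts on a stripe boundary) shows
how the stripe size decides whether a single request spans one member
disk or both:

    #!/bin/sh
    # Sketch: how many of the 2 member disks one aligned request
    # touches, under a small vs. a large interleave.  All sizes are
    # in 512-byte sectors; the values are illustrative assumptions.
    REQ=128                      # a 64KB request
    for ILEAVE in 16 512; do     # 8KB vs. 256KB stripes
        units=$(( (REQ + ILEAVE - 1) / ILEAVE ))  # stripe units crossed
        disks=$units
        [ "$disks" -gt 2 ] && disks=2
        echo "interleave $ILEAVE: 64KB request touches $disks disk(s)"
    done

With the 16-sector interleave the request is split across both disks
(higher transfer rate, both arms busy); with the 512-sector interleave
it fits on one disk, leaving the other free to serve a second request.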

All in all, if you have one process accessing the disk, you'll probably
be better off with small stripe sizes, but there is always a tradeoff.
If you care about performance and don't know the application well
enough, try it both ways!
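
If you do try it both ways, the procedure might look something like
the sketch below (going from memory of ccdconfig(8) and disklabel(8),
so check the man pages). The member partitions (sd0e, sd1e), the mount
point, and the interleave values are assumptions for illustration, and
newfs will of course destroy whatever is on those partitions:

    # Small interleave (16 sectors = 8KB); args are: device,
    # interleave, flags (0 = none), member partitions.
    ccdconfig ccd0 16 0 /dev/sd0e /dev/sd1e
    disklabel -r -w ccd0 auto      # put a label on the new ccd disk
    newfs /dev/rccd0c
    mount /dev/ccd0c /mnt
    time dd if=/dev/zero of=/mnt/scratch bs=1m count=1024  # write test
    time dd if=/mnt/scratch of=/dev/null bs=1m             # read test
    umount /mnt
    ccdconfig -u ccd0              # tear down, then repeat with a
                                   # large interleave, e.g.:
    ccdconfig ccd0 512 0 /dev/sd0e /dev/sd1e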


All this is said without any specific reference to the ccd driver. For
example, for my reasoning to hold, ccd would have to issue two
concurrent I/Os to the member disks when two outstanding requests are
each shorter than the stripe size and happen to fall on different
member disks.

Hope this helps,
Nadav


