From owner-freebsd-questions Wed Oct 2 00:53:44 1996
Return-Path: owner-questions
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3) id AAA18198 for questions-outgoing; Wed, 2 Oct 1996 00:53:44 -0700 (PDT)
Received: from gatekeeper.barcode.co.il (gatekeeper.barcode.co.il [192.116.93.17]) by freefall.freebsd.org (8.7.5/8.7.3) with ESMTP id AAA18193 for ; Wed, 2 Oct 1996 00:53:39 -0700 (PDT)
Received: (from nadav@localhost) by gatekeeper.barcode.co.il (8.7.5/8.6.12) id JAA24706; Wed, 2 Oct 1996 09:51:54 +0200 (IST)
Date: Wed, 2 Oct 1996 09:51:54 +0200 (IST)
From: Nadav Eiron
To: Fabio Cesar Gozzo
cc: questions@freebsd.org
Subject: Re: Interleave size in CCD
In-Reply-To: <199610011800.PAA01299@thomson.iqm.unicamp.br>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Content-Transfer-Encoding: QUOTED-PRINTABLE
Sender: owner-questions@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

On Tue, 1 Oct 1996, Fabio Cesar Gozzo wrote:

> Hello everybody,
>         I'm trying to concatenate 2 disks in my system (PPro,
> AHA 2940, 2 SCSI 2GB each). The concatenated disk ccd0 will be used
> for large (2GB) scratch files, i.e., an intensive read/write workload.
>     My question is: what would be a good value for the interleave?
>     Small values are good for reads and bigger ones for writes. But in
> this case, I have both.
>     Any hint would be much appreciated.
>
>
>                     Fabio Gozzo
>                     fabio@iqm.unicamp.br
>
>

Well, here is my hint:

I don't have any specific experience with ccd, but I've configured many
RAID systems (all sorts of hardware and software). The interleave
(sometimes referred to as the stripe size) in a RAID 0 array (striping)
has nothing to do with the balance of read and write operations. The two
are only related when parity is used, and then they are treated as two
separate RAID classes (RAID 3 vs. RAID 4/5), and even that is mostly
irrelevant now that RAID controllers implement write-back caches.

The tradeoff with stripe size (and mostly with the RAID 3 vs. 5 decision)
is whether the disks see a more-or-less single stream of requests or many
users hitting them concurrently, and how large the requests are.

Small stripe sizes work best if you have a single process generating
relatively long I/O requests (as most DBMSs do). A small stripe size makes
the array behave like a normal disk, but with double the transfer rate.

If, on the other hand, you have many users doing relatively short, random
I/Os, you'd be better off with a large stripe size (larger than the
longest request that will be made to the array). This effectively halves
the average access time, since the load is split across the two spindles,
while the transfer rate for each individual request stays the same as that
of a single disk; the total throughput will, of course, still be doubled.
With many users, large stripe sizes give the best throughput.

All in all, if you have one process accessing the disk, you'll probably be
better off with small stripe sizes, but there is always a tradeoff. If you
care about performance and don't know the application well enough - try it
both ways!

All this is said without any specific reference to the ccd driver. For
example, for my reasoning to hold, ccd would have to issue two concurrent
I/Os to the member disks when two requests are each shorter than the
stripe size and happen to fall on different member disks.

Hope this helps,
Nadav
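
P.S. A couple of concrete numbers, since the question was about an actual
value. If I'm reading ccd(4) correctly, the interleave is given in sectors
(DEV_BSIZE, normally 512 bytes), so the stripe size in bytes works out to
interleave * 512:

    interleave   8  ->   4 KB on one disk before moving to the next
    interleave  16  ->   8 KB
    interleave  64  ->  32 KB
    interleave 256  -> 128 KB

With the usual 8 KB filesystem block size, an interleave of 8 splits every
block across both disks (the "single stream, double the transfer rate"
case), while an interleave of 64 or more keeps each block on a single disk
(the "many concurrent requests" case). Please check the units against your
man page before relying on this - I'm going from memory.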
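
And in case it saves a trip to the man pages, the setup would look roughly
like the following. I haven't run this on a ccd myself, so treat it as a
sketch: the device names (sd0/sd1), the 'e' partitions and the interleave
of 64 are assumptions you'll have to adjust for your own machine.

    # stripe with an interleave of 64 sectors (32 KB), no flags
    ccdconfig ccd0 64 none /dev/sd0e /dev/sd1e

    # label the new device and put a filesystem on it
    disklabel -r -w ccd0 auto
    newfs /dev/rccd0c

    # tear it down again when you're done experimenting
    ccdconfig -u ccd0

The same parameters (without the ccdconfig) can go into /etc/ccd.conf so
the array comes up with "ccdconfig -C", e.g. at boot:

    # ccd   ileave  flags   component devices
    ccd0    64      none    /dev/sd0e /dev/sd1e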