From owner-freebsd-scsi Fri Dec 12 23:36:35 1997
Return-Path:
Received: (from root@localhost) by hub.freebsd.org (8.8.7/8.8.7)
	id XAA02243 for freebsd-scsi-outgoing; Fri, 12 Dec 1997 23:36:35 -0800 (PST)
	(envelope-from owner-freebsd-scsi)
Received: from iafnl.es.iaf.nl (uucp@iafnl.es.iaf.nl [195.108.17.20])
	by hub.freebsd.org (8.8.7/8.8.7) with SMTP id XAA02238 for ;
	Fri, 12 Dec 1997 23:36:32 -0800 (PST) (envelope-from wilko@yedi.iaf.nl)
Received: by iafnl.es.iaf.nl with UUCP id AA04608
	(5.67b/IDA-1.5 for freebsd-scsi@FreeBSD.ORG); Sat, 13 Dec 1997 08:36:47 +0100
Received: (from wilko@localhost) by yedi.iaf.nl (8.8.5/8.6.12)
	id AAA03706; Sat, 13 Dec 1997 00:12:38 +0100 (MET)
From: Wilko Bulte
Message-Id: <199712122312.AAA03706@yedi.iaf.nl>
Subject: Re: RAID on FreeBSD
To: tom@sdf.com (Tom)
Date: Sat, 13 Dec 1997 00:12:38 +0100 (MET)
Cc: walcaraz@indy3.gstone.com, freebsd-scsi@FreeBSD.ORG
In-Reply-To: from "Tom" at Dec 12, 97 01:55:16 pm
X-Organisation: Private FreeBSD site - Arnhem, The Netherlands
X-Pgp-Info: PGP public key at 'finger wilko@freefall.freebsd.org'
X-Mailer: ELM [version 2.4 PL24 ME8a]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-scsi@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

As Tom wrote...
> 
> On Fri, 12 Dec 1997, Wilko Bulte wrote:
> 
> > As Tom wrote...
> > 
> > > On Thu, 11 Dec 1997, Wilko Bulte wrote:
> > > 
> > > > > 9 drives is an uncomfortable number for RAID5. Probably better to go
> > > > 
> > > > Why would 9 drives be uncomfortable?
> > > 
> > > Well, if you are going to make one array of 9 drives, write
> > > performance will be bad. If you are going to make 3 arrays of 3
> > 
> > Why? Calculating the parity takes the same overhead in both cases.
> 
> To do a write, you will have to read the data back from the other drives
> to calculate the parity. The more drives, the more data you have to read
> back, and the more i/o you have to do to complete a write.
> You want to make a
> tradeoff between parity storage overhead and write performance.

Not true. You don't have to read data from all the drives. You can
satisfy a write by doing:

1. read the data block where the new data needs to go (target block) (drive A)
2. read the corresponding parity block (drive B)
3. remove the old data's contribution to the parity block by XOR-ing the
   data from steps 1 and 2. Keep the result.
4. create the new parity by XOR-ing the result of step 3 and the new data
5. write the parity block (to drive B)
6. write the new data (to drive A)

What results is 2 reads from different spindles, and 2 writes to different
spindles. Drives A and B can be parallelised pretty well with SCSI.

The situation deteriorates if the write operation crosses chunk boundaries;
you essentially have to repeat the recipe above.

The trick with RAID5 is that you must never update only the data OR only
the parity; you always have to update BOTH. If you have a crash of some
sort when one of them has been written but the other has not, you are
really stuck. This is called the write hole. In software (host-based) RAID
solutions this tends to be swept under the carpet.

> > each channel. This allows a channel to die completely without losing
> > your data.
> 
> The question was about a 9 drive setup, that is presumably going to be
> put into a single RAID5 array. How would you arrange 9 drives?

Ideally on 9 channels ;-)

> > I don't agree. It *really* depends on the hardware you're using. E.g.
> > the company I work for (DEC) sells the HSZx0 range of controllers.
> > This controller has (along with battery-backed writeback cache) 6 SCSI
> > device buses. The 'natural' number for that one is 6 drives.
> 
> Yep, I knew you work at DEC, and I knew you were probably baiting me. ;-)
> 5 is closer to 6 than to 9. So I guess you are closer to agreeing with
> me than with a 9 drive RAID5 array.
> 
> I guess the HSZx0 series is a SCSI-to-SCSI system? Of course that
> imposes certain limitations as well.
Yep, the later ones are Ultrawide differential SCSI to the host, and
Ultrawide single-ended to the disks. HSGx0 is a FibreChannel host
interface, to UW disks. There are of course limitations; for one,
14 drives is the maximum number in one RAID5 set.

Wilko

_     ______________________________________________________________________
 | /  o / /  _  Bulte  email: wilko @ yedi.iaf.nl  http://www.tcja.nl/~wilko
 |/|/ / / /( (_)  Arnhem, The Netherlands - Do, or do not. There is no 'try'
---------------- Support your local daemons: run [Free,Net]BSD Unix ------
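
P.S. A minimal sketch of the read-modify-write parity recipe described
above, in Python. This is illustrative only, not any controller's actual
firmware: the function name and the toy byte-string "blocks" are made up,
and steps 3 and 4 collapse into a single XOR pass because XOR is its own
inverse.

```python
def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes) -> bytes:
    """Return the new parity block for a RAID5 small write.

    Removing the old data's contribution (step 3) and folding in the
    new data (step 4) is one expression: parity ^ old_data ^ new_data.
    """
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Sanity check on a toy 3-drive stripe (drive A, drive B, parity):
drive_a = b"\x0f\x0f"
drive_b = b"\xf0\x01"
parity = bytes(x ^ y for x, y in zip(drive_a, drive_b))

# Overwrite drive A's block; note we never read drive B.
new_a = b"\xaa\x55"
new_parity = raid5_small_write(drive_a, parity, new_a)

# The updated parity matches a full-stripe recomputation.
assert new_parity == bytes(x ^ y for x, y in zip(new_a, drive_b))
```

The point of the exercise: the i/o cost of a small write is 2 reads and
2 writes regardless of how many drives are in the set, which is why the
parity update itself doesn't get worse with 9 drives.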