From owner-freebsd-scsi Fri Dec 12 14:55:47 1997
Return-Path:
Received: (from root@localhost) by hub.freebsd.org (8.8.7/8.8.7) id OAA02972 for freebsd-scsi-outgoing; Fri, 12 Dec 1997 14:55:47 -0800 (PST) (envelope-from owner-freebsd-scsi)
Received: from mail.kcwc.com (h1.kcwc.com [206.139.252.2]) by hub.freebsd.org (8.8.7/8.8.7) with SMTP id OAA02967 for ; Fri, 12 Dec 1997 14:55:42 -0800 (PST) (envelope-from curt@kcwc.com)
Received: by mail.kcwc.com (NX5.67c/NeXT-2.0-KCWC-1.0) id AA09122; Fri, 12 Dec 97 17:55:37 -0500
Date: Fri, 12 Dec 97 17:55:37 -0500
From: curt@kcwc.com (Curt Welch)
Message-Id: <9712122255.AA09122@mail.kcwc.com>
Received: by NeXT.Mailer (1.87.1)
Received: by NeXT Mailer (1.87.1)
To: freebsd-scsi@FreeBSD.ORG
Subject: Re: RAID on FreeBSD
Sender: owner-freebsd-scsi@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

On Fri, 12 Dec 1997, Wilko Bulte wrote:

> > As Tom wrote...
> > > Well, if you are going to make one array of 9 drives, write
> > > performance will be bad.  If you are going to make 3 arrays of 3
> >
> > Why?  Calculating the parity takes the same overhead in both cases.
>
> To do a write, you will have to read the data back from
> the other drives to calculate the parity.  The more
> drives, the more data you have to read back, and the more
> I/O you have to do to complete a write.  You want to make
> a tradeoff between parity storage overhead and write
> performance.

No.

To do a write, you read the old parity block and the old data block.
The new parity block is the old parity XORed with the old data block
and the new data block.  You then write the new data block and the
new parity block.

It's always two reads and two writes, no matter how many drives you
have in the array.

But when a drive fails, the controller must read a block from every
surviving drive whenever you want to read or write a single block
that was on the failed drive.  So larger arrays can have serious
performance problems when a drive fails - and if you need a certain
level of performance to keep running, this could be important.

Curt Welch
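
For the curious, here is a minimal C sketch of the XOR arithmetic
described above.  It assumes a hypothetical stripe of four byte-sized
data "blocks" plus one parity block - the NDATA count, the sample
values, and the choice of which block's drive "fails" are all made up
for illustration; a real controller works on whole sectors, but the
arithmetic is the same.

	/*
	 * Minimal sketch of RAID 5 parity arithmetic (assumed
	 * values throughout): one stripe of NDATA byte-sized data
	 * blocks plus one parity block.
	 */
	#include <stdio.h>

	#define NDATA	4	/* data blocks per stripe (assumed) */

	int
	main(void)
	{
		unsigned char data[NDATA] = { 0x11, 0x22, 0x33, 0x44 };
		unsigned char parity = 0;
		unsigned char newdata, newparity, rebuilt;
		int i;

		/* Full-stripe parity: XOR of all the data blocks. */
		for (i = 0; i < NDATA; i++)
			parity ^= data[i];

		/*
		 * Small write to block 2: read the old data block and
		 * the old parity block, XOR both with the new data to
		 * get the new parity, then write the new data and the
		 * new parity.  Two reads and two writes, regardless
		 * of how many drives are in the array.
		 */
		newdata = 0x5a;
		newparity = parity ^ data[2] ^ newdata;
		data[2] = newdata;
		parity = newparity;

		/*
		 * Degraded read: pretend the drive holding block 1
		 * has failed.  Its contents can only be rebuilt by
		 * reading every surviving block in the stripe and
		 * XORing them together - one read per remaining
		 * drive, so the cost grows with array width.
		 */
		rebuilt = parity;
		for (i = 0; i < NDATA; i++)
			if (i != 1)
				rebuilt ^= data[i];

		printf("block 1: on disk 0x%02x, rebuilt 0x%02x\n",
		    data[1], rebuilt);
		return (0);
	}

Note that only the degraded-mode loop touches every surviving block in
the stripe, which is why a 9-drive array hurts more than a 3-drive
array once a drive is down, while normal small writes cost the same
either way.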