From owner-freebsd-hackers Tue Oct 27 20:25:25 1998
Date: Tue, 27 Oct 1998 23:23:17 -0500 (EST)
From: david@sparks.net
To: Joe Greco
Cc: hackers@FreeBSD.ORG, mlnn4@oaks.com.au
Subject: Re: Multi-terabyte disk farm
In-Reply-To: <199810221747.MAA26363@aurora.sol.net>
Sender: owner-freebsd-hackers@FreeBSD.ORG

On Thu, 22 Oct 1998, Joe Greco wrote:

> If you do the raid5 thing, I would recommend clustering drives in a four-
> plus-one pattern that gives you three filesystems of 72 or 168GB each per
> SCSI chain.  This effectively limits the amount of data you need to fsck,
> and the amount of potential loss in case of catastrophic failure.  It
> does drive the price up a bit.

Why four plus one?  Maybe performance isn't an issue, but don't you stand
a real chance of lots of superblocks ending up on a single drive?  Seven
plus one would be a nicer fit for most rack mount chassis, which hold
eight drives.

Something I did when hooking up a system like this at a previous job was a
four-plus-one setup (the Mylex RAID controllers had five channels, so I
didn't have any choice :), where each of the five channels was an
independent channel on an external RAID controller.  Each channel went to
a separate rack mount chassis, so even if I lost a chassis, cable, or
power supply the thing was still running OK.
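As a quick sanity check on the four-plus-one arithmetic above, here is a
small sketch of RAID5 effective capacity.  The 18 GB per-drive size is my
assumption, inferred from the 72 GB filesystem figure; the function name is
just illustrative.

```python
def raid5_effective_gb(total_drives, group_size, drive_gb):
    """Each RAID5 group of N drives yields N-1 drives' worth of usable
    capacity, since one drive's worth of space goes to parity."""
    groups = total_drives // group_size
    data_drives = groups * (group_size - 1)
    return data_drives * drive_gb

# One four-plus-one group of (assumed) 18 GB drives:
print(raid5_effective_gb(5, 5, 18))    # -> 72

# Forty drives in four-plus-one groups:
print(raid5_effective_gb(40, 5, 18))   # -> 576
```

The 576 figure matches the 40-drive total mentioned below, which is what
suggests 18 GB drives in the first place.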
In that installation performance was an issue, so I hooked 20 drives each
up to two RAID controllers on two separate wide SCSI busses.  However,
there's no reason why 40 drives couldn't be connected to a single RAID
controller ($2,500-4,000 or so), for a total of 576 effective GB.  With
CCD they could even be configured as a single disk drive.  Or maybe not --
any UFS experts around?

> You may not want to start out using vinum, which is relatively new and
> untested.

I love watching the development, but it's a little too new for me to bet
*my* job on it :)  Besides, external RAID is easily cost-justified in a
project like this.

Hope this helps someone :)

--- David Miller
----------------------------------------------------------------------------
It's *amazing* what one can accomplish when one doesn't know what one
can't do!

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message