From owner-freebsd-scsi Tue Apr 13 12:36:56 1999
Delivered-To: freebsd-scsi@freebsd.org
Received: from aurora.sol.net (aurora.sol.net [206.55.65.76])
	by hub.freebsd.org (Postfix) with ESMTP id 4A96115180
	for ; Tue, 13 Apr 1999 12:36:47 -0700 (PDT)
	(envelope-from jgreco@aurora.sol.net)
Received: (from jgreco@localhost)
	by aurora.sol.net (8.9.2/8.9.2/SNNS-1.02) id OAA08960;
	Tue, 13 Apr 1999 14:47:12 -0500 (CDT)
From: Joe Greco
Message-Id: <199904131947.OAA08960@aurora.sol.net>
Subject: Re: Huge SCSI configs
To: remy@synx.com, ken@plutotech.com, freebsd-scsi@FreeBSD.ORG
Date: Tue, 13 Apr 1999 14:47:11 -0500 (CDT)
X-Mailer: ELM [version 2.4ME+ PL43 (25)]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-scsi@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

> > I am looking for advices about building a fearly huge SCSI config.

Yeah, that's fearly all right.  :-)

> > The config would be :
> >
> > - 4 SCSI chains (2x3950U2W planned)
>
> That should work okay, as long as you use -current or -stable *after* March
> 23rd.  I've got one in my test box, and it seems to work fine.  I haven't
> pushed it much, though.
>
> One thing to be careful about is what sort of slot you put this thing in.
> If you get a motherboard with only 32-bit slots, you need to make sure that
> the back end of the PCI slots is thin enough to handle a 64-bit PCI card.
>
> There was one machine that I tried to put my 3950 into that it wouldn't fit
> in, because the back end of the slot was too thick.  It worked fine in two
> other machines, though.
>
> > - 12 18.2 GB per chain (48 totals disks)
>
> Well, first off, make sure you get IBM or Seagate, and make sure you get
> one of their high end drives.  (not that they're making low-end 18 gig
> drives yet, AFAIK)  I've had direct experience with the 18G Seagate
> Cheetah II's and IBM Ultrastar 18XP's.  They both work fine.  My guess is
> that the IBM Ultrastar 18ZX would work well, too.
>
> You should be okay with most any 18G IBM or Seagate disk.
>
> But 12 per chain?  Assuming these are all Ultra2 LVD, you're still pushing
> things a bit as far as SCSI bus bandwidth is concerned.  For instance, the
> IBM Ultrastar 18ZX runs at about 23MB/sec on the outer tracks according to
> IBM's web site.
>
> With that sort of performance, you wouldn't be able to get maximum
> performance out of the disks if had more than 3 on an Ultra 2 chain.

Another thing to consider is data loss.  Can you afford to lose a disk?
I've been _very_ lucky with machines running 28-30 disks, simply CCD'd
together.  I think the drives are old enough that I should expect to have
several fall-over-n-die's in the coming year on this one here.

If you are more interested in reliability, consider getting a RAID
controller.  I've been playing with a Mylex DAC960SX myself, really slick
unit, which I didn't expect since I've had bad experiences with media
translators in the past.  The thing just magically works and shows up as a
disk to the OS.  I don't have to worry about setting up CCD, making all
those device nodes, etc.  It just appears to be one big disk...
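(For contrast, here's roughly what the CCD route involves on 3.x.  This is
just an illustrative sketch -- the device names, interleave, and mount
point are placeholders, not my actual news spool config:

    # make the ccd device nodes (the "making all those device nodes" part)
    cd /dev && sh MAKEDEV ccd0
    # each member disk already carries a BSD label with an 'e' partition;
    # glue four of them into one striped volume, 128-sector interleave
    ccdconfig ccd0 128 0 /dev/da0e /dev/da1e /dev/da2e /dev/da3e
    # from here on, ccd0 looks like any other disk
    newfs /dev/rccd0c
    mount /dev/ccd0c /news

Multiply that by 28-30 spindles per box and you can see why a controller
that just shows up as one da device is attractive.)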
da1 at ahc0 bus 0 target 1 lun 0
da1: Fixed Direct Access SCSI-2 device
da1: 40.0MB/s transfers (20.0MHz, offset 16, 16bit), Tagged Queueing Enabled
da1: 138928MB (284524544 512 byte sectors: 255H 63S/T 17710C)
da0 at ahc0 bus 0 target 0 lun 0
da0: Fixed Direct Access SCSI-2 device
da0: 40.0MB/s transfers (20.0MHz, offset 15, 16bit), Tagged Queueing Enabled
da0: 4148MB (8496884 512 byte sectors: 255H 63S/T 528C)

I'm getting 10MB/s read/write speeds from it.  I'd expect faster read
speeds, but I think that is a matter of some tuning, and I've optimized it
for parallel transactions anyway, so maybe that is just fine.

Yeah, there we go, on parallel processes I see the speed.

      tty             da1              cpu
 tin tout    KB/t tps  MB/s  us ni sy in id
   0   64   64.00 396 24.75   0  0  2  2 97
   0  115   64.00 393 24.57   0  0  2  2 96
   0   21   64.00 395 24.69   0  0  2  4 94
   0 1340   64.00 394 24.63   1  0  2  2 95
   0   21   64.00 397 24.81   0  0  5  1 93
   0  400   63.88 393 24.52   0  0  4  3 93
   0   21   63.10 395 24.34   0  0  2  2 96
   0  356   63.64 397 24.67   1  0  3  1 95
                              ^^!!!!

I have no idea what is eating all that CPU.  Heh.

The machine in question is an ASUS P2B-DS with onboard SCSI, 512MB RAM, and
2x PII-400.  The array is a dozen 18GB Seagate Barracuda F/W's, two spare
and ten participating in a RAID5, with all the parameters on the RAID
controller loosened to the max.  (Unfortunately, they are loosened in the
direction of what I need for a news server, so this might not be an
accurate picture of what the thing is actually capable of.)

This simplifies server design greatly, because you can simply build a
machine with one great SCSI controller, hook up a RAID controller, and be
done.  If you need additional speed, you can always build it as two or four
smaller independent arrays, I suppose.

Now, obviously, if you need mega speed, this isn't the way to go.  However,
for some sort of network server, 25MB/s exceeds what you can put out on a
100Mbps Ethernet segment, and is a good percentage of what you could do
with gigabit Ethernet.  Do you really need gigabit Ethernet?  Or would it
be more sensible to use a quad-port 100Mbps Ethernet card, a more tried and
known technology?  (I recommend the Adaptec ANA6944 myself.)

I'd reconsider what you are trying to accomplish, and then build a machine
to fit.  If it were me and I needed lots of read speed but could trade off
write speed for additional availability, I'd build eight chains of 7 disks
each, put two on each of four Mylex controllers, and then only have to
worry about how to hook up four fast drive arrays to the PC.  I've been
satisfied with the performance afforded by the P2B-DS, but I'm not really
moving a hundred megabytes of data per second around.  A machine with lots
of memory bandwidth would be ideal.

You might be able to go with fewer Mylexes, but I'm not too sure how far
one can push them.  FW SCSI is limited to 40MB/s, and if I'm doing 25
already, clearly there's a limit.  But I think even with the suggested
model above, you're probably going to have some Fun trying to deal with
moving 100MB/s through most PC architecture machines.

I'm waiting for the first person to tell me how you should be able to get
800MB/s++ out of such an array, heh... while the drives might be
theoretically capable of it, the SCSI busses and PC just won't be able to
deal with it.

I'm just waiting for someone to come up with a driver for the new Fore ATM
card, the ForeRunnerHE 622.  I like the larger packet size I can do with
ATM.  :-)
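(If you want to reproduce that kind of number yourself: the table above is
plain iostat output taken while the test was running.  The exact commands
aren't important, but something along these lines will get you in the
ballpark -- the path and sizes here are placeholders, not what I ran:

    # sequential write, then read it back; use a file bigger than RAM
    # (512MB here) so you measure the array, not the buffer cache
    dd if=/dev/zero of=/news/testfile bs=64k count=16384
    dd if=/news/testfile of=/dev/null bs=64k
    # meanwhile, in another window, watch the drive and cpu columns
    iostat da1 1

Run several of those in parallel to see the aggregate numbers quoted
above.)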
... Joe
-------------------------------------------------------------------------------
Joe Greco - Systems Administrator                           jgreco@ns.sol.net
Solaria Public Access UNIX - Milwaukee, WI                       414/342-4847


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-scsi" in the body of the message