Date: Tue, 13 Apr 1999 10:07:53 -0600 (MDT)
From: "Kenneth D. Merry" <ken@plutotech.com>
To: remy@synx.com
Cc: freebsd-scsi@FreeBSD.ORG
Subject: Re: Huge SCSI configs
Message-ID: <199904131607.KAA03222@panzer.plutotech.com>
In-Reply-To: <199904131453.QAA35693@rt2.synx.com> from Remy Nonnenmacher at "Apr 13, 1999 4:53:43 pm"
Remy Nonnenmacher wrote...
> On 12 Apr, Kenneth D. Merry wrote:
> > Remy Nonnenmacher wrote...
> >> I am looking for advice about building a fairly huge SCSI config.
> >>
> >> The config would be :
> >>
> >> - 4 SCSI chains (2x3950U2W planned)
> >
> > That should work okay, as long as you use -current or -stable *after*
> > March 23rd.  I've got one in my test box, and it seems to work fine.
> > I haven't pushed it much, though.
> >
>
> OK. Sorry for this late (but as yet unanswered) question: what are the
> known limits in terms of SCSI chains, number and size of disks, and
> sizes of filesystems?  Anyone?

Aww, come on, you can easily find out the SCSI stuff anywhere on the
web.

For Wide SCSI, you can have up to 15 devices on one chain.  (SCSI IDs 0
through 15, with ID 7 reserved for the controller)  With Ultra2 LVD,
you can have cable lengths of up to 12 meters.

There's no inherent limit on disk size, but you may want to talk to
Matt Jacob and Matt Dillon about the stability of very large
filesystems.

> > One thing to be careful about is what sort of slot you put this
> > thing in.  If you get a motherboard with only 32-bit slots, you need
> > to make sure that the back end of the PCI slots is thin enough to
> > handle a 64-bit PCI card.
> >
> >.....
>
> OK. Still looking for a 64-bit PCI MB.

Okay, that'll work.

> >> - 12 18.2 GB per chain (48 disks total)
> >
> > Well, first off, make sure you get IBM or Seagate, and make sure you
> > get one of their high end drives.  (not that they're making low-end
> > 18 gig drives yet, AFAIK)  I've had direct experience with the 18G
> > Seagate Cheetah II's and IBM Ultrastar 18XP's.  They both work fine.
> > My guess is that the IBM Ultrastar 18ZX would work well, too.
> >
> > You should be okay with most any 18G IBM or Seagate disk.
> >
> > But 12 per chain?  Assuming these are all Ultra2 LVD, you're still
> > pushing things a bit as far as SCSI bus bandwidth is concerned.  For
> > instance, the IBM Ultrastar 18ZX runs at about 23MB/sec on the outer
> > tracks according to IBM's web site.
> >
> > With that sort of performance, you wouldn't be able to get maximum
> > performance out of the disks if you had more than 3 on an Ultra2
> > chain.
> >
>
> Drives would be CCDed or Vinumed in groups of four, one per chain.
> Each drive would be divided into three parts, the outer, intermediate
> and inner cylinders, to provide 3 different performance rings.

Ahh, so you *are* looking for performance.

> Due to environmental constraints, there are two preferred drives: the
> IBM 18ES or the Quantum Atlas IV, both 1 inch high.

Do yourself a favor and don't get the Atlas IV.

So you need 1" high drives?

> > You'll also have to start worrying about PCI bus bandwidth and memory
> > bandwidth, depending on what sort of motherboard you get.
> >
> > So, one question I have is this -- are you looking for maximum disk
> > performance, or just a lot of disk space?  If you're just looking for
> > a lot of disk space, why not go with 36GB drives?  NECX (www.necx.com)
> > is selling IBM Ultrastar 36XP's for $1400.  The Ultrastar 18ES is
> > selling for $775.  So, it would be cheaper to go with a 36G drive.
> > (FWIW, I know that the 36XP's work just fine, but I haven't seen any
> > 18ES drives yet.  I'd imagine they work fine as well.)
> >
>
> 36.4 Gig drives are all 1.6 inches high and (probably) won't fit.
> 50.1 Gig drives are too young (bad experience with Seagate on early
> drives, and probably unaffordable) and would mean fewer R/W heads.

And 50G drives would probably also be 1.6".
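To make the striping scheme above concrete, here is a rough sketch of
what one of those four-drive stripes might look like with CCD.  This is
a sketch only: the device names (assuming da0, da12, da24 and da36 land
one per SCSI bus) and the 128-sector interleave are hypothetical values
you'd want to adjust, not a tested recipe.

    # sketch only -- assumes disklabel has carved each disk into three
    # "rings", with the 'e' partition covering the outer cylinders
    ccdconfig ccd0 128 0 /dev/da0s1e /dev/da12s1e /dev/da24s1e /dev/da36s1e
    # then label/newfs the striped device (see ccd(4) and ccdconfig(8))
    newfs /dev/rccd0c
    mount /dev/ccd0c /outer0

With three rings per disk you'd end up with twelve such stripes; Vinum
can express the same layout with striped plexes.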
> >> - SMP (Quad-Xeon or Bi-P2)
> >
> > Considering the monster you're trying to build, I'd say go for a
> > board that'll get you 64-bit PCI and high memory bandwidth.  My guess
> > is that a Quad Xeon board from Intel might do the trick.  Ask Mike
> > Smith about these, they had (have??) one at Walnut Creek, I think.
> >
>
> Thanks for your reply.  It seems pretty clear that PCI bandwidth will
> be the real problem.  I know for sure that it has been a problem in
> the Gigabit Ethernet case, and it will be a bottleneck with two (or
> more) fast SCSI cards running high-end drives.
>
> Also, Xeon machines do not appear to be the real killers Intel
> pretends they are, but SC450NX (or the HP version using 64-bit PCI)
> machines fit pretty well within my (rackmount) constraints.

I think you'll certainly want 64-bit PCI, and probably multiple PCI
busses.  I'm not sure whether you can get a 450NX box with two 64-bit
PCI busses (the chipset is capable; I don't know whether anyone makes
a motherboard with that particular configuration), but that might be a
good thing to look for.

In any case, with only 4 SCSI busses, I doubt you'll be able to get
the full performance out of your disks, but maybe the performance you
will get will be enough.

Here are some numbers to think about:

IBM Ultrastar 18ES peak performance:    20MB/sec
48 x Ultrastar 18ES:                   960MB/sec
4 x Ultra2 LVD Wide SCSI busses:       320MB/sec
Gigabit Ethernet theoretical peak:     125MB/sec
64-bit, 33MHz PCI:                    ~266MB/sec

So, you've got a couple of bottlenecks here.  The first is that the
total disk bandwidth you'll have is about 3 times the amount of SCSI
bus bandwidth you'll have available.  The second is that your two
3950U2's together would be capable of flooding a 33MHz, 64-bit PCI
bus.

You'll also run into memory bandwidth issues, processor speed issues
and various other things.

It'll certainly be an interesting experiment.  What sort of
performance are you trying to get out of this thing anyway?

Ken
--
Kenneth Merry
ken@plutotech.com
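A quick back-of-the-envelope check of the numbers above, written out as
sh arithmetic (a sketch only; the per-drive and per-bus figures are the
estimates quoted in this message, not measurements):

    # aggregate disk bandwidth vs. what the busses and PCI can carry
    echo "disks:  $((48 * 20)) MB/sec"  # 960 MB/sec from 48 x 20MB/sec drives
    echo "busses: $((4 * 80)) MB/sec"   # 320 MB/sec from 4 Ultra2 wide busses
    echo "pci:    $((8 * 33)) MB/sec"   # 64-bit/33MHz; ~266 at exactly 33.33MHz

The 960:320 ratio is the roughly 3-to-1 disk-to-bus mismatch described
above, and 320 > 266 is why the two 3950U2's together can flood a
single 64-bit, 33MHz PCI bus.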