From owner-freebsd-scsi Tue Apr 13 10:25:06 1999
Date: Tue, 13 Apr 1999 19:22:19 +0200 (CEST)
From: Remy Nonnenmacher
Reply-To: remy@synx.com
To: ken@plutotech.com
Cc: freebsd-scsi@FreeBSD.ORG
Subject: Re: Huge SCSI configs
In-Reply-To: <199904131607.KAA03222@panzer.plutotech.com>

On 13 Apr, Kenneth D. Merry wrote:
>> ....
>> OK. Sorry for this late (but still unanswered) question: what are the
>> known limits, in terms of SCSI chains, number and size of disks, and
>> sizes of filesystems? Anyone?
>
> Aww, come on, you can easily find out the SCSI stuff anywhere on the web.
> For Wide SCSI, you can have up to 15 devices on one chain (SCSI IDs 0
> through 15, with 7 reserved for the controller). With Ultra2 LVD, you
> can have cable lengths of up to 12 meters.

Oops, I lacked precision: "under FreeBSD"! (I already proposed building
a MAX config (15*2*15*50G), but they made me stop by knocking my head ;).
No, seriously, I wouldn't like to find myself running headlong into a
wall just for not having asked.

> There's no inherent limit on disk size, but you may want to talk to Matt
> Jacob and Matt Dillon about the stability of very large filesystems.

Fine. I'll do it.

>> Drives would be CCDed or Vinumed in groups of four, one per chain. Each
>> drive would be divided into three parts (the outer, intermediate and
>> inner cylinders) to provide three different performance rings.

> Ahh, so you *are* looking for performance.
> *AND* size...

>> Due to environmental constraints, there are two preferred drives: the
>> IBM 18ES and the Quantum Atlas 4, both 1 inch high.

> Do yourself a favor and don't get the Atlas 4. So you need 1" high
> drives?

Yes. 18G is the best tradeoff between performance, thermals, number of
heads and stacking. (BTW, I use both Quantum and IBM and have never had
problems. Let me know if you have had some with Quantum.)

> And 50G drives would probably also be 1.6".

Da. (And with ridiculous buffers: 1MB where everybody else is at 2MB and
going to 4MB.)

> I think you'll certainly want 64-bit PCI, and probably multiple PCI
> busses. I'm not sure whether you can get a 450NX box with two 64-bit
> PCI busses (the chipset is capable, I don't know whether anyone makes a
> motherboard with that particular configuration), but that might be a
> good thing to look for.

HP. (I opened one last week.)

> In any case, with only 4 SCSI busses, I doubt you'll be able to get the
> full performance out of your disks, but maybe the performance you will
> get will be enough.
>
> Here are some numbers to think about:
>
> IBM Ultrastar 18ES peak performance:   20 MB/sec
> 48 * Ultrastar 18ES:                  960 MB/sec
> 4 x Ultra2 LVD Wide SCSI busses:      320 MB/sec
> Gigabit Ethernet theoretical peak:    125 MB/sec
> 64-bit, 33 MHz PCI:                  ~266 MB/sec
>
> So, you've got a couple of bottlenecks here. The first is that the total
> disk bandwidth you'll have is about 3 times the amount of SCSI bus
> bandwidth you'll have available.
>
> The second is that your two 3950U2's together would be capable of
> flooding a 33 MHz, 64-bit PCI bus.
>
> You'll also run into memory bandwidth issues, processor speed issues and
> various other things. It'll certainly be an interesting experiment.
>
> What sort of performance are you trying to get out of this thing anyway?

100 x 10Mb/s video streams. In fact, the OS is only there to drive a
video super-astro-stuff that is fairly good at pumping data, but is too
stupid to even know where the video data blocks are.
The OS will have the role of driving the process of deciding where to
store data so as to reduce disk movement (aside from other little tasks,
like drinking beers with users). Peak performance is important here but
not really critical, as some stream-intensive operations can be driven
externally by the video pump. From a pure performance point of view this
is not really cutting-edge, given a good access-repartition algorithm
(100 x 10Mb ~ 125MB/s ~ 31MB/s per chain), and adding chains is easy. We
only need to keep things busy.

Using this stuff as a real filesystem is only a matter of benchmarking
and fun experiments seeking the limits (hence the Gigabit Ethernet).
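For anyone who wants to redo the back-of-the-envelope math from this
thread, here is a quick sketch. It is only a sanity check of the peak
figures quoted above (20 MB/s per 18ES, 80 MB/s per Ultra2 LVD bus, 100
streams at 10 Mb/s); real sustained numbers will of course be lower.

```python
# Bandwidth budget for the proposed array, using the thread's peak figures.
# All bandwidth values are in MB/s.

DRIVE_PEAK = 20    # IBM Ultrastar 18ES, peak transfer rate
N_DRIVES   = 48
BUS_PEAK   = 80    # one Ultra2 LVD Wide SCSI bus
N_BUSSES   = 4
GIGE_PEAK  = 125   # Gigabit Ethernet theoretical peak
PCI_64_33  = 266   # 64-bit, 33 MHz PCI, approximate

disk_total = N_DRIVES * DRIVE_PEAK          # 960 MB/s aggregate disk bandwidth
scsi_total = N_BUSSES * BUS_PEAK            # 320 MB/s aggregate bus bandwidth

# Target load: 100 video streams at 10 Mb/s each.
load_total = 100 * 10 / 8                   # 125 MB/s total (Mb -> MB)
per_chain  = load_total / N_BUSSES          # ~31 MB/s per SCSI chain

print(f"disk aggregate : {disk_total} MB/s")
print(f"SCSI aggregate : {scsi_total} MB/s "
      f"(disks oversubscribe the busses {disk_total / scsi_total:.0f}x)")
print(f"video load     : {load_total:.0f} MB/s total, "
      f"{per_chain:.2f} MB/s per chain")
print(f"fits in GigE?  : {load_total <= GIGE_PEAK}")
```

This confirms the point made above: each SCSI chain only needs to sustain
about 31 MB/s for the video load, well under the 80 MB/s bus peak, even
though the disks could saturate the busses three times over.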