Date:      Tue, 13 Apr 1999 11:39:34 -0600 (MDT)
From:      "Kenneth D. Merry" <ken@plutotech.com>
To:        remy@synx.com
Cc:        freebsd-scsi@FreeBSD.ORG
Subject:   Re: Huge SCSI configs
Message-ID:  <199904131739.LAA03759@panzer.plutotech.com>
In-Reply-To: <199904131722.TAA36868@rt2.synx.com> from Remy Nonnenmacher at "Apr 13, 1999  7:22:19 pm"

Remy Nonnenmacher wrote...
> On 13 Apr, Kenneth D. Merry wrote:
> >> ....
> >> OK. Sorry for this late (but ever unanswered question) : what are the
> >> known limits in term of SCSI chains, number and size of disks and sizes
> >> of filesystems ? Anyone ?
> > 
> > Aww, come on, you can easily find out the SCSI stuff anywhere on the web.
> > For Wide SCSI, you can have up to 15 devices on one chain.  (SCSI IDs 0
> > through 15, with 7 reserved for the controller)  With Ultra2 LVD, you can
> > have cable lengths of up to 12 meters.
> >
> 
> Oops, I lacked precision: "under FreeBSD"!  (I already proposed building
> a MAX config (15*2*15*50G), but they made me stop by knocking my head ;).
> No, seriously, I wouldn't like to find myself running headlong into a
> wall just for not having asked.

Well, it's a good thing to ask!

> >> Due to environmental constraints, there are two preferred drives: the
> >> IBM 18ES or the Quantum Atlas 4, both 1 inch high.
> > 
> > Do yourself a favor and don't get the Atlas 4.  So you need 1" high drives?
> >
> 
> Yes. 18G is the best tradeoff between performance/thermals/#heads/stacking.
> (BTW, I use both Quantum and IBM and have never had problems. Let me know
> if you have had any with Quantum.)

Yes, there are certainly problems with Quantum disks.  The main problem is
that they continually return Queue Full, until we reduce the number of
transactions queued to the device to the minimum (2).

We work around this problem on the Atlas 2 and 3 by setting the minimum
number of queued transactions to 24.  It would probably be better to avoid
Quantum disks altogether and go with IBM instead.
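The throttling behavior described above can be sketched roughly as follows.
This is an illustrative Python model, not the actual CAM transport code; the
class and method names (TaggedDevice, handle_queue_full) are made up for the
example, but the numbers (a floor of 2 by default, 24 as a quirk for the
Atlas 2/3) come from the discussion above.

```python
# Illustrative model of tag-queue throttling on QUEUE FULL status.
# Names here are hypothetical, not the real CAM identifiers.

class TaggedDevice:
    """Tracks how many tagged commands the driver will keep queued."""

    def __init__(self, openings, min_openings=2):
        self.openings = openings          # current allowed queue depth
        self.min_openings = min_openings  # floor: 2 by default,
                                          # 24 as a quirk for Atlas 2/3

    def handle_queue_full(self, outstanding):
        # The device refused a command with QUEUE FULL while
        # `outstanding` commands were queued: back off to one less,
        # but never below the per-device minimum.
        self.openings = max(outstanding - 1, self.min_openings)
        return self.openings


plain = TaggedDevice(openings=64)                    # no quirk entry
atlas = TaggedDevice(openings=64, min_openings=24)   # quirk applied

# A drive that keeps returning QUEUE FULL ratchets the un-quirked
# device all the way down to 2; the quirked one stops at 24.
for depth in range(64, 1, -1):
    plain.handle_queue_full(depth)
    atlas.handle_queue_full(depth)

print(plain.openings)  # 2
print(atlas.openings)  # 24
```

The quirk floor of 24 keeps enough commands in flight to get reasonable
tagged-queueing performance out of a drive that would otherwise collapse
the queue to 2.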

> > And 50G drives would probably also be 1.6".
> >
> Da. (And with a ridiculous 1M buffer, where everybody else has 2 and is
> going to 4.)
> > 
> > I think you'll certainly want 64-bit PCI, and probably multiple PCI busses.
> > I'm not sure whether you can get a 450NX box with two 64-bit PCI busses
> > (the chipset is capable, I don't know whether anyone makes a motherboard
> > with that particular configuration), but that might be a good thing to
> HP does. (I opened one last week.)
> > look for.  In any case, with only 4 SCSI busses, I doubt you'll be able
> > to get the full performance out of your disks, but maybe the performance
> > you will get will be enough.
> > 
> > Here are some numbers to think about:
> > 
> > IBM Ultrastar 18ES Peak performance:	20MB/sec
> > 48 * Ultrastar 18ES:			960MB/sec
> > 4 x Ultra2 LVD Wide SCSI busses:	320MB/sec
> > Gigabit Ethernet theoretical peak:	125MB/sec
> > 64-bit, 33MHz PCI:			~266MB/sec
> > 
> > So, you've got a couple of bottlenecks here.  The first is that the total
> > disk bandwidth you'll have is about 3 times the amount of SCSI bus
> > bandwidth you'll have available.
> > 
> > The second is that your two 3950U2's together would be capable of flooding
> > a 33MHz 64-bit PCI bus.
> > 
> > You'll also run into memory bandwidth issues, processor speed issues and
> > various other things.  It'll certainly be an interesting experiment.
> > 
> > What sort of performance are you trying to get out of this thing anyway?
> > 
> 
> 100x10Mb/s video stream. In fact, the OS is only there to drive a video
> super-astro-stuff that is fairly good at pumping data, but is too stupid
> to even know where video data blocks are. The OS will have the role of
> driving the process of deciding where to store data to reduce disk
> movements (aside from other little tasks, like drinking beers with users).
> 
> Peak performance is important here but not really critical, as some
> stream-intensive operations can be driven externally by the video pump.
> 
> From a pure performance POV, this is not really cutting-edge, given a
> good access-distribution algorithm (100x10Mb ~ 125MB/s ~ 31MB/chain), and
> adding chains is easy. We need only to keep things busy.

Ahh, okay.  So your performance requirements aren't too bad.  You probably
won't be able to get 100 10Mb/sec streams of video out of one Gigabit
Ethernet interface, though; that is the interface's full theoretical peak,
with no headroom for protocol overhead.  You'll probably want to divide the
load over two interfaces.
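The back-of-the-envelope figures from earlier in the thread are easy to
check with a little arithmetic.  The sketch below just redoes those sums
(1 MB = 8 Mb; the 80MB/sec per-bus figure is the Ultra2 LVD Wide rate
implied by the 4-bus total in the table; all figures ignore protocol
overhead):

```python
# Back-of-the-envelope check of the bandwidth figures in this thread.
# All numbers are theoretical peaks, ignoring protocol/framing overhead.

DISK_MB_S = 20          # IBM Ultrastar 18ES peak, MB/sec
N_DISKS = 48
BUS_MB_S = 80           # one Ultra2 LVD Wide SCSI bus, MB/sec
N_BUSES = 4
PCI64_MB_S = 266        # 64-bit, 33MHz PCI, approximate MB/sec
GIGE_MB_S = 125         # Gigabit Ethernet theoretical peak, MB/sec

total_disk = DISK_MB_S * N_DISKS      # 960 MB/sec of raw disk
total_bus = BUS_MB_S * N_BUSES        # 320 MB/sec of SCSI bus

# First bottleneck: the disks can source about 3x what the buses carry.
print(total_disk / total_bus)         # 3.0

# The actual workload: 100 streams of 10Mb/sec each.
streams_mb_s = 100 * 10 / 8           # 125 MB/sec total
per_chain = streams_mb_s / N_BUSES    # ~31 MB/sec per SCSI chain
print(per_chain)                      # 31.25

# 125 MB/sec is exactly GigE's theoretical peak -- hence the suggestion
# to split the streams across two interfaces.
print(streams_mb_s >= GIGE_MB_S)      # True
```

So the SCSI chains have comfortable headroom for the 125MB/sec workload;
the single GigE interface is the part running at its limit.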

> Using this stuff as real filesystems is only a matter of benchmarking
> and fun experiments seeking for limits (hence the GigaEthernet).

Certainly sounds interesting.

Ken
-- 
Kenneth Merry
ken@plutotech.com


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-scsi" in the body of the message



