From owner-freebsd-hackers Thu Aug 5 21:41:29 1999
Date: Fri, 6 Aug 1999 06:40:18 +0200
From: Bernd Walter
To: Greg Lehey
Cc: Bernd Walter, Stephen Hocking-Senior Programmer PGS Tensor Perth, hackers@FreeBSD.ORG
Subject: Re: Adding disks -the pain. Also vinum
Message-ID: <19990806064017.A30780@cicely8.cicely.de>
In-Reply-To: <19990806105354.T5126@freebie.lemis.com>; from Greg Lehey on Fri, Aug 06, 1999 at 10:53:54AM +0930

On Fri, Aug 06, 1999 at 10:53:54AM +0930, Greg Lehey wrote:
> On Tuesday, 3 August 1999 at 23:20:45 +0200, Bernd Walter wrote:
> > On Tue, Aug 03, 1999 at 03:59:46PM +0930, Greg Lehey wrote:
> >> On Tuesday, 3 August 1999 at 8:12:17 +0200, Bernd Walter wrote:
> >>
> >>> For UFS/FFS there is nothing to gain from setting the stripe size
> >>> too low.  It is generally slower to access 32k on different HDDs
> >>> than to access 64k on one HDD.
> >>
> >> It is always slower where the positioning time is greater than the
> >> transfer time for 32 kB.  On modern disks, 32 kB transfer in about
> >> 300 µs.  The average rotational latency of a disk running at
> >> 10,800 rpm is 2.8 ms, and even with spindle synchronization there's
> >> no way to avoid rotational latency under these circumstances.
> >
> > It shouldn't be the latency, because with spindle sync it is the
> > same on both disks if the transfers are requested at exactly the
> > same time, which is of course idealized.
>
> Spindle sync ensures that the same sectors on different disks are
> under the heads at the same time.  When you perform a stripe
> transfer, you're not accessing the same sectors, you're accessing
> different sectors.  There's no way to avoid rotational latency under
> these circumstances.

We are talking about the same point with the same results.  I agree
that you will only access the same sectors in some special cases.
Let's say two striped disks with 512-byte stripes and FFS with 1 kB
frags.

> > The point is that you have more than a single transfer.  With small
> > transfers spindle sync is able to win back some of the performance
> > you have lost with a too small stripe size.
>
> No, this isn't correct, unless you're running 512 byte stripes.

That's what I meant by a 'too small stripe size'.

> In this case, a single-stripe transfer of, say, 8 kB with the disks
> above would take about 7 ms total latency (same as with a single
> disk), but the transfer would take less time: 5 µs instead of 80 µs.
> You'd need 16 disks, and you'd tie them all up for 7 ms.  And this
> doesn't consider the times of SCSI command setup and such.

In the rare case where you need maximum bandwidth for only one
application and one stream, I am glad to hear that all drives are tied
up on the job.
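Just to make the arithmetic explicit, here is a little sketch of the
numbers above.  The seek time and transfer rate are assumptions I
picked so that the output lands near the figures Greg quotes; they are
not measurements of any particular drive:

/*
 * Positioning time vs. transfer time for a striped request.
 * The seek time and transfer rate below are assumed values.
 */
#include <stdio.h>

static void
stripe_time(double req_kb, int ndisks)
{
	double rpm = 10800.0;			/* spindle speed */
	double seek_ms = 4.2;			/* assumed average seek */
	double mb_per_s = 100.0;		/* assumed transfer rate */
	double rot_ms = 60000.0 / rpm / 2.0;	/* avg rotational latency */
	double pos_ms = seek_ms + rot_ms;	/* total positioning time */
	double xfer_ms = req_kb / ndisks / 1024.0 / mb_per_s * 1000.0;

	printf("%5.1f kB over %2d disk(s): position %.1f ms, transfer %.3f ms\n",
	    req_kb, ndisks, pos_ms, xfer_ms);
}

int
main(void)
{
	stripe_time(32.0, 1);	/* 32 kB on a single disk */
	stripe_time(32.0, 2);	/* the same 32 kB split over two disks */
	stripe_time(8.0, 16);	/* the 8 kB over 16 disks case */
	return (0);
}

However you split the request over 1, 2 or 16 spindles, the roughly
7 ms of positioning stays the same; only the fraction of a millisecond
of transfer time shrinks, and every spindle involved is busy for the
whole 7 ms.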
> Basically, this is not the way to go if you have multiple clients for
> your storage.  Look at http://www.lemis.com/vinum/problems.html and
> http://www.lemis.com/vinum/Performance-issues.html for more details.
>
> >>> Spindle synchronisation won't bring you that much on modern HDDs -
> >>> I tried it using 5 Seagate Elite 2.9G (5.25" Full-Height).
> >>
> >> It should be useful for RAID-3 and streaming video.
> >
> > In case of large transfers it will make sense - but FFS is unable to
> > set up big enough requests.
>
> No, this is a case where you're only using one client, so my
> argumentation above doesn't apply (since you're reading sequentially,
> so latency is no longer an issue).

I don't know what bandwidth streaming video needs, but if you need the
combined bandwidth of all the disks in use, the first thing to do is to
linearise access to the disks.  Multi-file access often breaks that
linearisation.

All I tried to say is that it is hopeless to expect much more bandwidth
than a single disk delivers from single-process access.

As an example: yesterday I was asked whether 6 old striped disks would
be faster for cvsup than one modern disk, because a run sometimes takes
more than one telephone unit.  The answer is no.  cvsup (if run
regularly) spends most of its time sending the directory contents of
the destination.  Usually no other programs are accessing any disks at
the same time, so you benefit only a very small bit from the additional
drives; perhaps from the additional block cache on the drives, and when
updating atime.

Believe it or not, multiple files are accessed concurrently on servers
and maybe under some window managers, but on many home and desktop
machines it happens only rarely.

As an example, I personally use 7 200 MB IBM disks striped into one
volume (they all have LEDs :).  The only way to utilize nearly all of
them in a useful way is writing with softupdates enabled.

-- 
B.Walter                COSMO-Project          http://www.cosmo-project.de
ticso@cicely.de         Usergroup              info@cosmo-project.de

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message