Date:      Sat, 2 Sep 1995 13:24:46 -0700 (PDT)
From:      Julian Elischer <julian@ref.tfs.com>
To:        rgrimes@gndrsh.aac.dev.com (Rodney W. Grimes)
Cc:        vernick@CS.SunySB.EDU, freebsd-hackers@FreeBSD.org
Subject:   Re: 4GB Drives
Message-ID:  <199509022024.NAA05005@ref.tfs.com>
In-Reply-To: <199509021333.GAA15713@gndrsh.aac.dev.com> from "Rodney W. Grimes" at Sep 2, 95 06:33:26 am

> 
> > 
> > >You see, in modern workstation disk drives you have something called
> > >spindle sync.  Well, when you set up spindle sync you have 2 mode select
> > >values you tweak.  One bit says who is the sync master and who are
> > >the sync slaves.  Then for each slave drive you tweak another value
> > >that is used to offset the spindles from perfect sync so that the I/O
> > >of block zero of a track on drive 0 of a stripe set has just finished
> > >the scsi bus transfer when block zero of a track on drive 1 is about to
> > >come under the heads.
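To put rough numbers on that skew, here is a minimal back-of-the-envelope
sketch in C; the 7200 rpm spindle, 64K stripe chunk, and 10MB/sec bus
figures are illustrative assumptions only, not numbers from the setup
described above.

#include <stdio.h>

int main(void)
{
    double rpm     = 7200.0;                    /* assumed spindle speed     */
    double rot_ms  = 60.0 * 1000.0 / rpm;       /* one revolution: ~8.33 ms  */
    double chunk   = 64.0 * 1024.0;             /* assumed chunk size, bytes */
    double bus_bps = 10.0e6;                    /* assumed bus rate, bytes/s */
    double xfer_ms = chunk / bus_bps * 1000.0;  /* bus time per chunk        */
    double frac    = xfer_ms / rot_ms;          /* fraction of a revolution  */

    /* offset the slave spindle so its block zero comes under the heads
       just as the master finishes its bus transfer */
    while (frac >= 1.0)
        frac -= 1.0;
    printf("slave spindle offset: %.2f of a revolution (%.0f degrees)\n",
           frac, frac * 360.0);
    return 0;
}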
> > 
> > Why do you want the data under the heads when the SCSI bus becomes
> > free? Wouldn't you rather have the data already in the disk cache?  If
> > the bus is free and the disk is transferring from the medium and out
> > over the bus, the bottleneck is the disk transfer rate.  However, if
> > the data is already in the cache, it can go at SCSI bus speeds.
> 
> You're thinking too simply: if the data were in the cache when the I/O
> hit the drive, it would immediately go to data phase, preventing me from
> issuing another I/O to the next drive.  I actually _don't_ want the
> drive to come on the bus yet.  I need to be able to get all transfer
> operations to the drives before _any_ drive grabs the bus for a data
> transfer.  This makes all the drives operate as a parallel unit.
> 
> To simplify the problem I have gone to one drive per controller for
> development purposes, but I am still trying to work out how to make
> 4 drives on one bus operate such that I can get the commands to all
> 4 drives before any one of them grabs the bus for what is a rather
> long data transfer phase.
You can issue a SCSI SEEK command to all the devices first, then issue
the reads as a second round of commands.  That way they will all give
you an instant response on the read:

seek1
seek2
seek3
read1-data-phase
read2-data-phase
read3-data-phase



SEEK is command 0x2B, though it is an optional command.
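A rough sketch of that two-round ordering in C.  issue_cdb() is a
hypothetical stand-in for whatever routine the driver uses to queue a
CDB to a target (it is not a real interface), and 512-byte sectors are
assumed; the point is only the ordering of the commands.

#include <stdint.h>
#include <string.h>

#define NDRIVES 3

/* hypothetical: queue a CDB to a SCSI target and return immediately */
extern void issue_cdb(int target, const uint8_t *cdb, int cdb_len,
                      void *data, uint32_t data_len);

static void build_seek10(uint8_t cdb[10], uint32_t lba)
{
    memset(cdb, 0, 10);
    cdb[0] = 0x2B;                  /* SEEK(10), the optional command above */
    cdb[2] = (lba >> 24) & 0xFF;    /* logical block address, big-endian    */
    cdb[3] = (lba >> 16) & 0xFF;
    cdb[4] = (lba >>  8) & 0xFF;
    cdb[5] =  lba        & 0xFF;
}

static void build_read10(uint8_t cdb[10], uint32_t lba, uint16_t nblocks)
{
    memset(cdb, 0, 10);
    cdb[0] = 0x28;                  /* READ(10) */
    cdb[2] = (lba >> 24) & 0xFF;
    cdb[3] = (lba >> 16) & 0xFF;
    cdb[4] = (lba >>  8) & 0xFF;
    cdb[5] =  lba        & 0xFF;
    cdb[7] = (nblocks >> 8) & 0xFF; /* transfer length in blocks */
    cdb[8] =  nblocks       & 0xFF;
}

void read_stripe(uint32_t lba, uint16_t nblocks, void *bufs[NDRIVES])
{
    uint8_t cdb[10];
    int t;

    /* round 1: start every drive's heads moving; no data phase yet */
    for (t = 0; t < NDRIVES; t++) {
        build_seek10(cdb, lba);
        issue_cdb(t, cdb, 10, NULL, 0);
    }
    /* round 2: the reads land on drives that are already positioned */
    for (t = 0; t < NDRIVES; t++) {
        build_read10(cdb, lba, nblocks);
        issue_cdb(t, cdb, 10, bufs[t], (uint32_t)nblocks * 512);
    }
}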

julian
>  
> > As for the 85% of the SCSI bandwidth you have achieved, that is the
> > maximum that I can get.
> 
> I am using more than one bus; each bus itself is only seeing a 44% load.
> 
> > Rather than worry about seeks and latency
> > delays, I simply request the same data over and over from the disks.  I
> > bypass the file system code and make sure each request (64K, or 128
> > sectors) goes back to the disk.  However, the data is already in the
> > disk cache, thus incurring no seek or rotational delays.  With 3, 4, and
> > 5 disks on a single controller it maxes out at 8.5MB/sec.  Thus, the
> > controller, disk, and bus overhead must account for the other 15%.  If
> > you can get rid of that overhead, let me know. :)
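(That single-cached-block measurement can be reproduced with a few lines
of user code; a minimal sketch, where the raw device name and iteration
count are placeholders.)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

#define XFER  (64 * 1024)   /* 128 sectors, matching the requests above */
#define ITERS 1000          /* arbitrary */

int main(void)
{
    char *buf = malloc(XFER);
    struct timeval t0, t1;
    double secs;
    int fd, i;

    /* raw device (placeholder name) so the file system code is bypassed */
    fd = open("/dev/rsd0c", O_RDONLY);
    if (buf == NULL || fd < 0) {
        perror("setup");
        return 1;
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERS; i++) {
        /* re-read the same 64K every time: after the first pass it comes
           straight from the drive's cache, with no seek or rotation delay */
        if (lseek(fd, (off_t)0, SEEK_SET) == -1 ||
            read(fd, buf, XFER) != XFER) {
            perror("read");
            return 1;
        }
    }
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.2f MB/sec\n", ITERS * (double)XFER / secs / 1e6);
    return 0;
}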
> 
> I can't eliminate that overhead, but I sure can make it seem to be gone
> by using a pipeline effect.  The problem is that getting the timing of
> things right so that it really is a pipeline is rather difficult.
> 
> I am not going to continue this thread much in public; it is taking
> my time away from doing the research.  Let it be known that I am rather
> deeply entrenched in this stuff, and most of what is being talked about
> here I have pretty much covered myself.  I also have a bit of background
> in the hardware way of doing this stuff, a la Auspex boxes, where much of
> this technology is buried in the firmware of their disk processing
> boards (not really controllers by anyone's standards; these are very
> smart boards).
> 
> I am attempting to duplicate and reverse engineer as much of this technology
> as I can into a software model that will work with a specific controller/
> set of hard drives initially, and then expand upon that model to generalize
> it a bit, but it will always take deep study of the specific drives and
> controllers to properly tune it to pipeline correctly and operate at
> maximum utilization.  It involves the study of latencies at all points
> of the pipe and control over the sequence of events to make it all hum.
> 
> It may seem at times that I am asking about how to do some really
> counter-performance types of things (turning off tag queueing and
> disconnect/reconnect), but this is being done to study fundamental latency
> delays without the side effects of these features.  It is not my intention
> to leave them off in the final model.  But without methods to study these
> fundamental things, deriving good models is very hard.
> 
> I will probably have to use a software queueing modeling system to study
> the more complex effects these will have, once they are understood at the
> fundamental level, in order to derive full models.
> 
> -- 
> Rod Grimes                                      rgrimes@gndrsh.aac.dev.com
> Accurate Automation Company                 Reliable computers for FreeBSD
> 



