Date:      Sun, 19 Feb 1995 14:04:06 -0800 (PST)
From:      "Justin T. Gibbs" <gibbs@estienne.CS.Berkeley.EDU>
To:        toor@jsdinc.root.com (John S. Dyson)
Cc:        gibbs@implode.root.com, toor@Root.COM, bde@zeta.org.au, bde@freefall.cdrom.com, CVS-commiters@freefall.cdrom.com, cvs-sys@freefall.cdrom.com
Subject:   Re: cvs commit: src/sys/sys buf.h
Message-ID:  <199502192204.OAA06203@estienne.cs.berkeley.edu>
In-Reply-To: <199502192052.PAA01558@jsdinc> from "John S. Dyson" at Feb 19, 95 03:52:13 pm

> 
> > 
> > Without looking at increasing the max I/O size, will there be a performance
> > gain in this approach?  If so, it becomes cheap and easy to change the
> > max I/O size based on benchmarks and individual needs.  Will, say, doubling
> > the max to 128k further compromise real-time performance?  What percentage
> > of I/O transactions will even approach this size?
> > 
> > -- 
> > Justin T. Gibbs
> > ==============================================
> > TCS Instructional Group - Programmer/Analyst 1
> >   Cory | Po | Danube | Volga | Parker | Torus
> > ==============================================
> > 
> 
> This is my worry, not that I *actually* know the following to be TRUE:
> 
> Well, it appears that on an IDE drive there is the >possibility< for it to
> stream data for a pretty long while.  It would cause the system to be I/O
> (interrupt) bound (because of the slow ISA bus) for a long time.  There is the
> possibility for this I/O operation to last approx 64K * ??usecs/transfer :-(.
> (Bruce knows more about ISA bus timing than I do, but I guess that it is about
> .5usecs??? per word or more).  That is a long, long time.  The kernel cannot
> do much about it once the (long) I/O operation is queued. 
> 
> Bus-mastering SCSI worries me much less, but it is still a concern (as drives
> get faster and the old ISA bus stays the same speed) :-).
> 
> John
> dyson@root.com

What prevents us from making the I/O transaction bracketing device-specific?
The filesystem creates a one-meg transaction.  This is then broken down
by the device-specific strategy routine into chunks that make sense for
the device (the linked-list data structure makes this easy).  This was my
hope anyway, since, for example, the SCSI subsystem is the only layer that
could know enough about a particular controller to make a good decision on I/O
size.  For wd devices, we could leave the max at 64k (or even less?), but as
Bruce points out, even for PIO devices, descending the list would be faster
than the upfront "virtually contiguous" clustering scheme currently in place.
I think that, constructed properly, we can have our cake and eat it too.

-- 
Justin T. Gibbs
==============================================
TCS Instructional Group - Programmer/Analyst 1
  Cory | Po | Danube | Volga | Parker | Torus
==============================================
