Date:      Sun, 19 Feb 1995 11:18:17 -0800 (PST)
From:      "Justin T. Gibbs" <gibbs@estienne.CS.Berkeley.EDU>
To:        toor@jsdinc.root.com (John S. Dyson)
Cc:        bde@zeta.org.au, bde@freefall.cdrom.com, CVS-commiters@freefall.cdrom.com, cvs-sys@freefall.cdrom.com
Subject:   Re: cvs commit: src/sys/sys buf.h
Message-ID:  <199502191918.LAA06025@estienne.cs.berkeley.edu>
In-Reply-To: <199502191755.MAA00244@jsdinc> from "John S. Dyson" at Feb 19, 95 12:55:02 pm

> 
> > 
> > I'd like to see it used as an alternative method for clustering.  Any
> > device doing DMA needs the physical address of pages within a MAXPHYS
> > buffer, so you end up looping through it.  Why bother sticking block
> > sized buffers into a virtually contiguous cluster buffer?  Why not just link 
> > them together to form the transaction?  This also allows controllers like
> > the 27/28/2942 to do transactions up to 1meg in size (you'd want to limit
> > it below that to prevent it from hogging the SCSI bus).  What would be even
> > better is some knowledge of what type of device you are talking to so that
> > if it happens to be PIO, you can give it virtually contiguous chunks of the
> > right size.
> > 
> I agree with Justin, we simply "improved" upon the original 4.4 clustering
> code, but his notion of chained buffers (that he had mentioned to me a
> couple of months ago) appears to be very good.  The code that we currently
> have has an upper limit of 64K on the cluster size.  The problem is that
> with the current scheme, there is a system wide upper limit.  Some devices
> could probably benefit from increased cluster sizes.  However, we might need
> to be able to tune down the maximum I/O transfer sizes (esp. non
> bus-mastering stuff), for realtime performance.  FreeBSD is being used
> in a couple of fairly time-critical applications, and it would be nice
> someday to be able to support nearly real-time (1-5msecs at least) scheduling.
> Not that we even approach that now (with page table pre-faulting and future IDE
> multi-block clustering).

Without increasing the max I/O size, will there be a performance
gain from this approach?  If so, it becomes cheap and easy to change the 
max I/O size based on benchmarks and individual needs.  Will, say, doubling
the max to 128k further compromise real-time performance?  What percentage
of I/O transactions will even approach this size?

> 
> All I am saying is that it would be nice to be able to make sure that the
> new scheme is compatible with pseudo-real-time kernel performance.  Perhaps
> the clustering code is not the right-place to make sure that real-time
> performance is not further compromised?
> 
> John
> dyson@root.com

-- 
Justin T. Gibbs
==============================================
TCS Instructional Group - Programmer/Analyst 1
  Cory | Po | Danube | Volga | Parker | Torus
==============================================