From owner-cvs-sys Sun Feb 19 17:54:38 1995
Return-Path: cvs-sys-owner
Received: (from majordom@localhost) by freefall.cdrom.com (8.6.9/8.6.6) id RAA23099 for cvs-sys-outgoing; Sun, 19 Feb 1995 17:54:38 -0800
Received: from estienne.cs.berkeley.edu (estienne.CS.Berkeley.EDU [128.32.42.147]) by freefall.cdrom.com (8.6.9/8.6.6) with ESMTP id RAA23084; Sun, 19 Feb 1995 17:54:29 -0800
Received: (from gibbs@localhost) by estienne.cs.berkeley.edu (8.6.9/8.6.9) id LAA06025; Sun, 19 Feb 1995 11:18:18 -0800
From: "Justin T. Gibbs"
Message-Id: <199502191918.LAA06025@estienne.cs.berkeley.edu>
Subject: Re: cvs commit: src/sys/sys buf.h
To: toor@jsdinc.root.com (John S. Dyson)
Date: Sun, 19 Feb 1995 11:18:17 -0800 (PST)
Cc: bde@zeta.org.au, bde@freefall.cdrom.com, CVS-commiters@freefall.cdrom.com, cvs-sys@freefall.cdrom.com
In-Reply-To: <199502191755.MAA00244@jsdinc> from "John S. Dyson" at Feb 19, 95 12:55:02 pm
X-Mailer: ELM [version 2.4 PL24]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 2447
Sender: cvs-sys-owner@freebsd.org
Precedence: bulk

>
> >
> > I'd like to see it used as an alternative method for clustering.  Any
> > device doing DMA needs the physical address of pages within a MAXPHYS
> > buffer, so you end up looping through it.  Why bother sticking
> > block-sized buffers into a virtually contiguous cluster buffer?  Why
> > not just link them together to form the transaction?  This also allows
> > controllers like the 27/28/2942 to do transactions up to 1 meg in size
> > (you'd want to limit it below that to prevent it from hogging the SCSI
> > bus).  What would be even better is some knowledge of what type of
> > device you are talking to, so that if it happens to be PIO, you can
> > give it virtually contiguous chunks of the right size.
> >
> I agree with Justin; we simply "improved" upon the original 4.4
> clustering code, but his notion of chained buffers (which he had
> mentioned to me a couple of months ago) appears to be very good.  The
> code that we currently have has an upper limit of 64K on the cluster
> size.  The problem is that with the current scheme, there is a
> system-wide upper limit.  Some devices could probably benefit from
> increased cluster sizes.  However, we might need to be able to tune
> down the maximum I/O transfer sizes (esp. non-bus-mastering stuff) for
> real-time performance.  FreeBSD is being used in a couple of fairly
> time-critical applications, and it would be nice someday to be able to
> support nearly real-time (1-5 msec at least) scheduling.  Not that we
> even approach that now (with page-table pre-faulting and future IDE
> multi-block clustering).

Without increasing the max I/O size, will there be a performance gain
from this approach?  If so, it becomes cheap and easy to change the max
I/O size based on benchmarks and individual needs.  Will, say, doubling
the max to 128K further compromise real-time performance?  What
percentage of I/O transactions will even approach this size?

>
> All I am saying is that it would be nice to be able to make sure that
> the new scheme is compatible with pseudo-real-time kernel performance.
> Perhaps the clustering code is not the right place to make sure that
> real-time performance is not further compromised?????
>
> John
> dyson@root.com

-- 
Justin T. Gibbs
==============================================
 TCS Instructional Group - Programmer/Analyst
 1 Cory | Po | Danube | Volga | Parker | Torus
==============================================
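
A minimal user-space sketch of the chained-buffer idea discussed above,
under stated assumptions: the names chain_buf, sg_entry, build_sg_list,
and BLKSIZE are made up for illustration and are not FreeBSD's struct
buf or any real driver interface.  Block-sized buffers are linked into a
chain, and the driver builds a scatter/gather list for the controller by
walking that chain instead of requiring one virtually contiguous cluster
buffer.  A real kernel would translate each page of each buffer into a
physical address through the VM/DMA layer; here the virtual address
stands in for it so the example compiles and runs anywhere.

/*
 * Hypothetical sketch of chained buffers: link block-sized buffers
 * into one logical transaction and build a scatter/gather list from
 * the chain.  Not FreeBSD code; illustration only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>

#define BLKSIZE 8192            /* one block-sized buffer per element */

struct chain_buf {
        void             *cb_data;      /* block-sized data buffer */
        size_t            cb_len;       /* valid bytes in cb_data */
        struct chain_buf *cb_next;      /* next buffer in the transaction */
};

struct sg_entry {
        uint64_t sg_addr;       /* address handed to the controller */
        uint32_t sg_len;        /* length of this segment */
};

/*
 * Walk the chain and fill in a scatter/gather list.  In a real kernel
 * each page of each buffer would be translated to a physical address;
 * here we reuse the virtual address so the example runs in user space.
 * Returns the segment count, or -1 if the controller's S/G limit is hit.
 */
static int
build_sg_list(struct chain_buf *head, struct sg_entry *sg, int max)
{
        int n = 0;

        for (struct chain_buf *cb = head; cb != NULL; cb = cb->cb_next) {
                if (n == max)
                        return (-1);
                sg[n].sg_addr = (uint64_t)(uintptr_t)cb->cb_data;
                sg[n].sg_len = (uint32_t)cb->cb_len;
                n++;
        }
        return (n);
}

int
main(void)
{
        struct chain_buf bufs[4];
        struct sg_entry sg[16];

        /* Link four block-sized buffers into one logical transaction. */
        for (int i = 0; i < 4; i++) {
                bufs[i].cb_data = malloc(BLKSIZE);
                if (bufs[i].cb_data == NULL)
                        return (1);
                bufs[i].cb_len = BLKSIZE;
                bufs[i].cb_next = (i < 3) ? &bufs[i + 1] : NULL;
        }

        int nseg = build_sg_list(&bufs[0], sg, 16);
        printf("transaction: %d segments, %d bytes total\n",
            nseg, nseg * BLKSIZE);

        for (int i = 0; i < 4; i++)
                free(bufs[i].cb_data);
        return (0);
}

With this layout the bound on transaction size comes from the
controller's segment limit rather than from a system-wide virtual
cluster cap such as the 64K limit mentioned in the thread.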